Mercury Labs

Primary Bid

Archie Norman
We deployed a fully operational data pipeline for Primary Bid, a company dedicated to ensuring that public markets are open, inclusive, transparent, and fair for all investors.

Primary Bid is dedicated to ensuring that public markets are open, inclusive, transparent, and fair for all investors, allowing them to easily access and invest in IPOs, follow-ons, investment trusts, bonds, and SPACs. By providing an innovative, cutting-edge platform that seamlessly connects companies to their investors, Primary Bid is revolutionising the way people invest in public markets, enabling them to make informed decisions and access new opportunities.

We deployed a fully operational data pipeline to consume events generated at various points within their mobile and web applications and, where appropriate, their backend services.

Value Added

  • Data infrastructure able to support around 1 million users in a single day.
  • Near-realtime, board-level insights into IPO performance.
  • The ability to identify bottlenecks in the share registration process.
  • Marketing analytics to improve the client's service.

The Deliverable

The client asked us to build fully fledged data infrastructure providing (near) realtime analytics on X’s IPO. We delivered visual dashboards to the client showing:

  • Realtime registrations
  • Live funnel progression
  • Total USD value generated
  • Referrer attribution
  • Sessions by location
  • Registrations vs sessions
  • Onboarding percentiles
  • Data pipeline health and latency
  • Transaction statistics
  • Brokerage statistics
  • Payment errors

The process

We broke the data-pipeline part of the project into six stages. As a technical requirement, we expected to receive events from circa 1 million users in less than one day.
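That requirement can be sanity-checked with a quick throughput estimate. The events-per-user and peak factors below are illustrative assumptions, not PrimaryBid figures:

```python
# Back-of-envelope sizing for the stated requirement: events from
# ~1 million users within a single day.
USERS_PER_DAY = 1_000_000
EVENTS_PER_USER = 20            # assumed: page views, funnel steps, clicks
SECONDS_PER_DAY = 24 * 60 * 60

avg_events_per_sec = USERS_PER_DAY * EVENTS_PER_USER / SECONDS_PER_DAY
print(f"average: {avg_events_per_sec:.0f} events/s")

# Traffic around an IPO open is bursty; an assumed 10x peak factor
# gives the rate the collectors actually need to absorb.
peak_events_per_sec = avg_events_per_sec * 10
print(f"assumed peak (10x): {peak_events_per_sec:.0f} events/s")
```

Even a modest events-per-user assumption puts sustained load in the hundreds of events per second, which is what drove the streaming design below.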

  • Trackers: the tracker is the component that runs in the customer’s environment, generating Snowplow events and sending them to the pipeline collectors.
  • Collectors: the collector is the service, running in the client’s environment, that receives events from trackers in raw form, ready to be processed in the next step.
  • Enricher: consumes raw events from the collector’s data stream, then validates, enriches, and publishes them as JSON.
  • Iglu: a machine-readable, open-source schema repository for JSON Schema. The Iglu registry acts as the store of data schemas.
  • Loader: we chose to load the enriched events out of the pipeline into Elasticsearch and S3.
  • Visualisation: our dashboards were built using Grafana, which provided the toolset we needed to explain the data to the client.
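The enrichment stage above can be sketched in miniature. This is an illustrative toy, not Snowplow's enricher: the field names and the stand-in schema are assumptions, and a real pipeline validates against Iglu-hosted JSON Schemas and routes failures to a bad-rows stream.

```python
import json
from datetime import datetime, timezone
from typing import Optional

# Stand-in for an Iglu schema: the set of fields a raw event must carry.
# Field names here are illustrative, not the Snowplow event protocol.
REQUIRED_FIELDS = {"event_id", "app_id", "collector_ts"}

def enrich(raw_event: dict) -> Optional[str]:
    """Validate a raw collector event, enrich it, and publish it as JSON."""
    missing = REQUIRED_FIELDS - raw_event.keys()
    if missing:
        # Failed validation: in a real pipeline these would land in a
        # "bad rows" stream rather than being silently dropped.
        return None
    enriched = dict(raw_event)
    # Example enrichment: stamp the event with ETL processing time.
    enriched["etl_ts"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(enriched)

good = enrich({"event_id": "e1", "app_id": "web",
               "collector_ts": "2021-01-01T00:00:00Z"})
bad = enrich({"event_id": "e2"})  # missing fields -> rejected (None)
```

The JSON strings emitted by this stage are what the loader then writes to Elasticsearch and S3, where Grafana can query them.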