
How We Built Pulse in One Day

Pulse is the SaaS analytics demo on our homepage. It includes a dashboard, user authentication, charting, and a realistic data model. We built it in one working day. Here is exactly how.

Pulse was built as a demonstration of what AI-native development looks like in practice. The goal was not a toy. It needed to be something a founder would recognize as representative of a real product: proper auth, useful data, a design that felt like a serious tool.

The brief (9:00 AM)

We started with a one-paragraph brief: a SaaS analytics dashboard for a hypothetical B2B product. It should show key metrics (MRR, active users, churn), have a chart of growth over time, include a user table, and require authentication to access. It should look like a real product, not a tutorial project.

We wrote the full spec in about 45 minutes. That included: data model (users, sessions, metrics, events), API endpoints, page layout, authentication flow, chart type and data shape, and success criteria (can log in, can see the dashboard, numbers look real).

Stack decisions (made in the spec)

Next.js App Router with Tailwind for the frontend. Express API with SQLite for the backend. JWT authentication. Recharts for the data visualization. Seed data generated programmatically to produce realistic-looking growth curves.

We made one deliberate simplification: no real payment integration. The subscription data is all seeded. This is a demo, and adding Stripe would have taken 3 hours for zero user-facing value.

Agent execution (10:00 AM to 3:00 PM)

The agent ran for approximately 5 hours of wall time across two sessions. In the first session, it scaffolded the project, implemented the data model, built the API layer, and set up authentication. In the second session, it built the frontend components, wired the API calls, and implemented the charts.

The agent produced approximately 2,800 lines of code across 31 files. Of that, we manually edited about 200 lines: adjustments to the color palette, a tweak to the chart tooltip format, and one fix to the authentication redirect behavior that the agent had interpreted differently than we intended.

Human interventions

We intervened four times during the build:

Once to clarify the chart date range (the spec said “growth over time” and the agent defaulted to 30 days; we wanted 90 days with a monthly grouping).

Once to correct the color of the metric cards. The agent used the system accent color for all of them; we wanted differentiated colors per metric type.

Once to fix a layout issue on mobile where the sidebar overlapped the main content area at certain screen widths. This was a CSS judgment call the agent could not make without seeing the result.

Once to adjust the seed data algorithm. The initial growth curve was too smooth and did not look realistic. We specified a formula that added variance to make it read as real data.
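The variance fix can be sketched as a smooth exponential trend multiplied by a weekend dip and day-to-day jitter, with a seeded PRNG so the demo data is reproducible across runs. The constants and function names below are illustrative, not the values from Pulse's actual seed script:

```javascript
// Sketch of a seed-data generator whose growth curve reads as real
// data. Constants (base 120 users, 1.5% daily growth, 15% weekend
// dip, +/-10% jitter) are illustrative assumptions.
function seededRandom(seed) {
  // mulberry32: tiny deterministic PRNG so the same seed always
  // produces the same "realistic" curve.
  return function () {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generateDailyActiveUsers(days, seed = 42) {
  const rand = seededRandom(seed);
  const series = [];
  for (let d = 0; d < days; d++) {
    const base = 120 * Math.pow(1.015, d);          // smooth exponential trend
    const weekend = d % 7 >= 5 ? 0.85 : 1;          // weekend usage dip
    const jitter = 0.9 + 0.2 * rand();              // +/-10% daily noise
    series.push(Math.round(base * weekend * jitter));
  }
  return series;
}
```

The multiplicative jitter is what kills the too-smooth look: the curve still trends up, but adjacent days no longer sit exactly on the trend line.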

Deployment (3:30 PM)

We deployed to a Cloudflare Workers environment for the API and Vercel for the frontend. The deployment took about 20 minutes including the environment variable setup and the DNS configuration. The demo was live before 4 PM.

What the numbers look like

Total time: 7 hours from blank spec to deployed product. Human time: about 2.5 hours (spec writing, 4 interventions, review, deployment). Agent time: 5 hours. Lines of code produced by agent: ~2,800. Lines manually edited: ~200 (about 7%).

A similar project built traditionally by a solo developer would take 3 to 5 days. With a two-person team, 2 to 3 days. The primary compression came from the agent working continuously while the human was available but not blocked.

What we learned

The spec quality directly predicted the correction rate. The sections of the spec we had written most precisely (the data model, the API endpoints) produced code that needed almost no adjustment. The sections we had been vague about (visual design, chart formatting) required the most correction.

Mobile layout decisions consistently require human review. The agent produces correct implementations for the desktop view it can reason about from the spec, but breakpoint behavior requires visual judgment.

Everything else was straightforward. The agent handled dependency management, error handling, loading states, form validation, and API error responses correctly without specific instruction for each case; a clear spec was enough to get them for free.
