Ada Lovelace’s notes on the Analytical Engine described a repeatable algorithm. See how her early ideas map to modern program design and thinking.

You’ve probably heard the headline version: Ada Lovelace wrote “the first algorithm,” a set of instructions intended for Charles Babbage’s Analytical Engine. People still cite it because it’s an early, surprisingly clear example of what we now call programming—separating a goal into precise steps a machine could follow.
This article isn’t trying to recreate the Engine’s gears or prove every historical claim beyond dispute. Instead, it focuses on the programming ideas inside Lovelace’s work: how you turn a math problem into something executable, how you represent data, and how you communicate a procedure so someone else (or something else) can run it.
Lovelace’s famous “Notes” read like a bridge between mathematics and software design. Even though the machine was largely hypothetical, the thinking is familiar to anyone who has ever tried to make a computer do something reliable.
Here’s what we’ll keep an eye on as we go:

- how a goal gets broken into precise, checkable steps
- how values are named and stored so nothing gets mixed up
- how repetition keeps a long procedure manageable
- how clear notation and documentation make a method executable by others
By the end, the goal is simple: see Lovelace’s “first algorithm” less as a museum piece and more as an early template for computational thinking that still mirrors how we design programs today.
Augusta Ada King, Countess of Lovelace—better known as Ada Lovelace—grew up at a crossroads of poetry and mathematics. Her mother encouraged rigorous study, and Ada quickly became part of a small circle of prominent scientists and thinkers. She wasn’t a lone genius working in isolation; she was a gifted collaborator who asked unusually clear questions about what machines could mean, not just what they could do.
Charles Babbage was already famous for his plans for mechanical calculation when Ada met him. Babbage could design hardware in his head: gears, shafts, and number wheels arranged into a system. Ada, meanwhile, had a talent for explanation—taking complex technical ideas and translating them into structured, communicable concepts.
Their relationship worked because their strengths were different. Babbage pushed the engineering vision forward; Ada pushed the conceptual vision forward, especially the idea that a machine could follow a sequence of operations that someone designs in advance.
Babbage’s Analytical Engine was not just a better calculator. On paper, it described a general-purpose machine: one that could store values, perform operations, and run a planned procedure step by step. Think of it as an early blueprint for what we now call a programmable computer—even though it was never completed in their lifetimes.
The 1840s were a moment when mathematics, industry, and automation were starting to overlap. People were hungry for reliable methods—tables, formulas, and repeatable procedures—because errors were expensive and science was accelerating. In that context, Ada’s interest in “how to instruct a machine” wasn’t a curiosity. It was a timely response to a growing need: turning human reasoning into repeatable, checkable processes.
Before Ada Lovelace could describe an algorithm, there had to be a machine worth “programming.” Charles Babbage’s Analytical Engine was conceived as a general-purpose calculator: not a device for one specific formula, but a machine that could be set up to carry out many different sequences of operations.
The core idea was straightforward: if you can break a problem into small arithmetic steps (add, subtract, multiply, divide), a machine should be able to perform those steps reliably, in the right order, as many times as needed.
That’s the leap from a one-off calculation to a reusable method.
Babbage described two main components:

- the “store,” which held numbers (an early form of memory)
- the “mill,” which performed operations on those numbers (an early form of a processor)
For input and output, the Engine was designed to take instructions and data using punched cards (inspired by weaving looms), and to produce results in a human-usable form—printed or otherwise recorded.
If you map those ideas to today:

- the store corresponds to memory
- the mill corresponds to the CPU
- punched cards correspond to the program and its input data
- the printed results correspond to output
This is why the Analytical Engine matters: it sketches the same separation we still rely on—hardware that can execute steps, and programs that define which steps to execute.
When people talk about Ada Lovelace and the first algorithm, they’re often pointing to a specific document: the “Notes” she appended to her English translation of Luigi Menabrea’s paper about Charles Babbage’s Analytical Engine.
Menabrea described the machine’s concept. Lovelace went further: she treated the Engine as something you could instruct—not just admire. That shift is why these Notes matter so much in programming history. They read like early computational thinking: breaking a goal into precise steps, choosing representations, and anticipating how a mechanism will follow them.
Lovelace’s Notes explain what we’d now call program design. She describes the Engine’s parts (like a memory store and a processing “mill”) in terms of how operations could be sequenced and controlled. The central idea is simple but profound: if the Analytical Engine can perform operations in a defined order on defined symbols, then the “how” must be written down in a form the machine can execute.
This is where her work starts to resemble modern programming. It’s not just theory; it’s method.
Most importantly, the Notes include a worked example presented as a table of steps. It lays out, line by line, what the machine should do—what values are in which locations, what operation happens next, and where results are stored.
That table format is an ancestor of today’s pseudocode, flowcharts, and instruction schedules: an explicit, checkable plan you can follow without guessing. Whether or not you ever build an Analytical Engine, the habit it teaches—turning an idea into an executable sequence—is still the heart of writing software.
An algorithm, in everyday language, is a repeatable method: a set of clear steps that reliably takes you from a starting point to an answer. It’s like a recipe that doesn’t depend on intuition—if you follow the steps, you should get the same result every time.
Ada Lovelace’s famous example algorithm aimed to calculate Bernoulli numbers—a sequence of values that shows up in many areas of mathematics (for example, formulas for sums like 1 + 2 + … + n, and in parts of calculus). You don’t need to know the theory behind them to appreciate why they’re a great “test case” for an early computing machine.
They’re challenging in the right way:

- each new Bernoulli number depends on earlier ones, so results must be stored and reused
- computing one value takes several operations in a strict order
- the same pattern of steps repeats for every new value
In other words, it’s complex enough to prove the machine can follow a structured method, but still orderly enough to be written down as steps.
At its core, the algorithm has a familiar structure we still use in programs:

- set up initial values
- compute intermediate quantities
- combine them according to a fixed formula
- store the result, then repeat the pattern for the next case
Seen this way, Lovelace wasn’t just pointing to a number being computed—she was showing how to organize a multi-step calculation so a machine could execute it without guessing.
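To make that four-part shape concrete, here is a modern sketch in Python. This is not Lovelace’s original table translated line for line: `bernoulli_numbers` is an illustrative name, and the code uses the standard recurrence (with the B1 = -1/2 convention) rather than her exact sequence of Engine operations.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return B_0..B_n as exact fractions (B_1 = -1/2 convention)."""
    B = [Fraction(1)]                 # set up: the initial value B_0 = 1
    for m in range(1, n + 1):
        # compute and combine: earlier results feed a fixed formula
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))      # store the result for later steps
        # the loop then repeats the same pattern for the next case
    return B

print(bernoulli_numbers(4))  # 1, -1/2, 1/6, 0, -1/30
```

Each new value is built only from values already stored, which is exactly the dependency structure that made Bernoulli numbers a good test case for the Engine.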
When people talk about Lovelace’s Bernoulli numbers algorithm, they often focus on the result (“an early program”) rather than the design work that makes the steps reliable. The real achievement isn’t just listing operations—it’s shaping them so a machine can follow them without improvising.
Instead of treating “compute Bernoulli numbers” as one task, the Notes break it into smaller parts that can be repeated and checked: compute intermediate values, combine them in a specific formula, record results, and then move on to the next case.
That decomposition matters because each subtask can be validated in isolation. If an output looks wrong, you don’t debug “the whole algorithm”; you inspect one piece.
A mechanical computer doesn’t “keep things in mind.” Every value that will be needed later must be stored somewhere, and the Notes are careful about that. Some numbers are temporary working values; others are final results that must persist for later steps.
This is an early form of thinking about program state:

- which values are temporary and may be overwritten
- which values must persist for later steps
- where each value lives, so the next operation can find it
The order of operations is a safety feature. Certain calculations must happen before others, not for elegance, but to avoid using an unprepared value or accidentally overwriting something still needed.
In modern terms, Lovelace is designing control flow so the program has a clear path: do A, then B, then C—because doing B first would silently produce the wrong answer.
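A tiny, hypothetical illustration of that hazard is swapping two values: doing the steps in the wrong order silently destroys data a later step still needs.

```python
# Goal: swap a and b.
a, b = 3, 7

# Wrong order would be: a = b, then b = a  -> both end up 7,
# because a's old value was overwritten before it was used.

temp = a   # A: save the value a later step still needs
a = b      # B: now it is safe to overwrite a
b = temp   # C: restore the saved value

print(a, b)  # 7 3
```

The fix isn’t cleverness; it’s sequencing: step A must run before step B, or the result is quietly wrong.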
One of the most “modern” ideas hiding in Lovelace’s step table is repetition: the ability to do the same set of instructions again and again, not because you’re stuck, but because repeating is the fastest path to a result.
Repetition in a program means: follow a small recipe of steps, check whether you’re done, and if not, run the same recipe again. The key is that something changes each time—often a counter, a position in a table, or the value you’re building up—so the program moves toward a finish line.
In Lovelace’s notation, you can see this as a structured return to earlier steps. Rather than rewriting identical instructions many times, she describes a pattern and indicates when to cycle back. That’s the seed of what we now call iteration.
If you’ve written code, you’ve seen this pattern as a for loop (“repeat this N times”) or a while loop (“repeat until a condition is true”). Her table also implies familiar loop ingredients:

- a counter that changes each pass
- an accumulated value that builds toward the result
- a condition that decides when to stop
Imagine you want the sum of 1 through 5.
- set total = 0
- set i = 1
- add i to total
- increase i by 1
- if i is still 5 or less, repeat the add-and-increase steps

This is iteration in plain terms: a small loop that updates a counter and accumulates a result. Lovelace’s contribution wasn’t only what she computed—it was showing that repeating structure can be written down clearly enough for a machine (and future humans) to execute reliably.
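The steps above translate almost word for word into a loop. A minimal Python sketch:

```python
# Sum of 1 through 5, following the add-and-increase recipe.
total = 0          # set total = 0
i = 1              # set i = 1
while i <= 5:      # if i is still 5 or less, repeat
    total += i     # add i to total
    i += 1         # increase i by 1

print(total)  # 15
```

A for loop (`for i in range(1, 6): total += i`) expresses the same pattern with the counter managed for you.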
A procedure can be perfectly logical in your head and still be impossible for a machine—or another person—to follow without a way to refer to changing quantities. That’s where variables and notation matter.
Think of a variable as a labeled box on a desk. The label stays the same, but what’s inside can change as you work.
If you’re computing a sequence, you might have:

- a box for the running total
- a box for the current position (a counter)
- a box for the value you just computed
Without those boxes, you’re forced to describe everything in long sentences (“take the number you just computed two steps ago…”), which quickly turns into a tangle.
In Lovelace’s Notes, the symbols and labels aren’t there to look formal—they’re there to make the process executable. Clear notation answers practical questions:

- Which quantity does this symbol refer to?
- Is this the current value or a previous one?
- Where does this result go, and which step reads it next?
When procedures get long, these small clarifications prevent the most common error: mixing up similar-looking quantities.
Good variable naming is still one of the cheapest ways to reduce bugs. Compare x1, x2, x3 with current_sum, term_index, and next_term: the second set tells you what the boxes are for.
Types add another layer of safety. Deciding whether something is an integer, a decimal, a list, or a record is like choosing the right kind of container—some mistakes become impossible, or at least easier to catch early.
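The contrast can be shown directly. In this sketch the names and type hints are illustrative choices, not anything taken from the Notes:

```python
# Descriptive names plus type hints: each "box" says what it holds.
current_sum: int = 0     # running total, persists across passes
term_index: int = 1      # counter, changes every pass

while term_index <= 5:
    next_term: int = term_index  # the value being added this pass
    current_sum += next_term
    term_index += 1

# The same loop with x1, x2, x3 would compute the same answer,
# but a reader could no longer tell the boxes apart at a glance.
print(current_sum)  # 15
```

The hints also let a type checker flag container mix-ups (say, assigning a list where an integer belongs) before the program ever runs.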
Variables and notation turn “a clever idea” into steps that can be repeated correctly, by anyone (including a machine).
Abstraction means focusing on what matters and intentionally hiding the details that don’t. It’s the difference between saying “sort this list” and describing every swap and comparison by hand. Lovelace’s Notes show this instinct early: they aim to communicate a method clearly, without forcing the reader to get stuck in the engine’s mechanical specifics.
A striking feature of the Notes is how they keep the core idea independent from the machine’s physical actions. The Analytical Engine has its own “how” (gears, store, mill), but the Notes emphasize the “what”: the sequence of operations needed to reach a result.
That separation is the seed of what we now call software design:

- the method (the “what”) stays stable even when the machinery changes
- the mechanism (the “how”) can be swapped out without rewriting the method
When you can describe the method without re-explaining the machine, you’re already treating computation as something portable—capable of being re-implemented on different hardware, or by different people.
The step-by-step tables in the Notes resemble early “procedures”: a defined set of steps that can be followed again and again. Modern code formalizes this as functions, modules, and reusable components.
A good function does what Lovelace’s presentation does:

- hides the mechanical details inside
- exposes clear inputs and outputs
- can be followed (or called) again without re-deriving it
This is why abstraction isn’t about being vague—it’s about being usable. Reuse follows naturally: once a method is expressed cleanly, you can call it again in a new context, combine it with other methods, and build larger systems without drowning in details.
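In code, that packaging is an ordinary function. `sum_first` here is a hypothetical example, not something from the Notes:

```python
def sum_first(n: int) -> int:
    """Add the integers 1..n; callers see the 'what', not the loop."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Reuse in new contexts without re-explaining the mechanism:
print(sum_first(5))    # 15
print(sum_first(100))  # 5050
```

Once the method is named and parameterized, it can be combined with other methods, which is exactly how reuse scales into libraries.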
Ada Lovelace didn’t just describe what the Analytical Engine could do—she showed how to make a procedure unambiguous for another person (or machine) to follow. That’s the quiet power of her Notes: they treat explanation as part of the work, not decoration.
One reason her presentation still feels modern is the use of structured, step-by-step tables. A table forces decisions that vague prose can hide:

- what exactly happens at each step
- which values are read and which are written
- what order the operations run in
That reduces ambiguity in the same way pseudocode does today. You can read a paragraph and think you understand it—until you try to execute it. A step table makes the “execution path” visible, which is exactly what good program documentation aims to do.
Lovelace’s Notes mix three things we still try to keep together:

- what the program is for (intent)
- how it works (the procedure)
- how to interpret the notation (the interface: names, symbols, assumptions)
That maps neatly to modern comments, docstrings, and READMEs. A README explains the goal and context. Inline comments clarify tricky steps. Docstrings define inputs/outputs and edge cases. When any one of these is missing, users are left guessing—and guessing is where bugs breed.
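Here is a sketch of those three layers living together in one place. The function `average` is a hypothetical example chosen for illustration:

```python
def average(values: list[float]) -> float:
    """Return the arithmetic mean of `values`.

    Intent: summarize a list of measurements with one number.
    Inputs: a non-empty list of numbers.
    Output: their mean, as a float.
    Edge case: raises ValueError on an empty list, rather than guessing.
    """
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)
```

The docstring answers the questions a stranger would otherwise have to reverse-engineer from the body: what it is for, what goes in, what comes out, and what happens at the edges.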
When you document a process (code or not), write as if someone will reproduce it without you:

- state the goal and inputs up front
- list the steps in the order they must run
- name values consistently throughout
- call out assumptions and edge cases
That’s not extra work—it’s how a method becomes reusable.
Ada Lovelace is often introduced with a bold label: “the first programmer.” It’s a useful shorthand, but it can also flatten a more interesting truth. The debate isn’t just about pride of place—it’s about what we mean by program, computer, and authorship.
If “programmer” means someone who wrote instructions intended for a general-purpose machine, Lovelace has a strong claim. In her Notes on the Analytical Engine, she described a step-by-step method for generating Bernoulli numbers—essentially a plan for how the Engine could carry out a non-trivial calculation.
But historians debate the label because:

- the Engine was never built, so the program never actually ran
- Babbage had sketched example calculations for the machine before the Notes appeared
- the work was collaborative, which makes single authorship hard to pin down
It’s important to separate inventing a computing idea from building a working computer. Babbage’s major contribution was architectural: a proposed machine with memory (“store”), a processor (“mill”), and control via punched cards. Lovelace’s contribution was interpretive and expressive: she clarified what such a machine could represent and how a procedure could be written down so the machine could follow it.
A program doesn’t stop being a program because the hardware never shipped. In modern terms, it’s like writing software for a platform that’s still theoretical—or specifying an algorithm before the chip exists.
A respectful way to talk about this era is to treat it as a collaboration across roles:

- Babbage as the architect, designing the machine and its capabilities
- Lovelace as the expositor and programmer, expressing what the machine could do as explicit, executable procedures
What we can say confidently: Lovelace’s Notes helped define what programming is—not merely calculation, but the careful expression of a process that a machine could carry out.
Lovelace’s Notes matter because they show how to think when turning an idea into a machine-executable plan. Even if you never touch punch cards or mechanical gears, the core lessons still map neatly to modern program design: give the work a clear structure, name things carefully, use repetition intentionally, and build reusable pieces.
Structure beats cleverness. A program is easier to build and maintain when it’s broken into steps that have a clear purpose. Lovelace’s approach encourages you to design the shape of the solution before obsessing over details.
Clarity is a feature. Her tables and explanations weren’t decoration—they were part of the program. When future-you (or a teammate) can follow the logic quickly, the program becomes more reliable.
Iteration is a tool, not a trick. Repetition (loops) is how you scale a method. The key is to define what repeats, what changes each time, and when it stops.
Abstraction enables reuse. If a sequence of steps works once, you should be able to reuse it with different inputs. That’s the seed of functions, modules, and libraries.
If you’ve ever used a “build it by describing it” workflow—writing requirements, iterating on a plan, then generating working software—you’ve already reenacted the spirit of Lovelace’s Notes: make the procedure explicit, keep state clear, and document assumptions so execution is repeatable.
That’s one reason vibe-coding platforms like Koder.ai fit naturally into this story. Koder.ai lets you create web, backend, and mobile applications through a chat interface, but the same fundamentals apply: you still get better results when you specify inputs/outputs, name things consistently, and ask for step-by-step structure (planning mode can help you lock down the “Notes” before you generate or change code). The tooling is new; the discipline is not.
Use this quick pass before you start coding—or when you’re debugging something that feels messy:

- What are the inputs and the expected outputs?
- What are the steps, in order, and why that order?
- What state changes, and what must persist?
- What repeats, what changes each pass, and when does it stop?
- What assumptions would a stranger need to know?
If you want to strengthen the “notes-first” style of program design, these will help:

- sketch the procedure in pseudocode or a step table before writing code
- name state explicitly instead of keeping it “in your head”
- document intent, inputs/outputs, and assumptions alongside the code
Taken together, these habits turn programming from “make it work” into “make it understandable”—the same shift Lovelace’s Notes were already pointing toward.
Ada Lovelace’s “first algorithm” is a step-by-step procedure (presented in her Notes) intended to be executed by Charles Babbage’s Analytical Engine. It’s famous because it treats computation as a planned sequence of operations on stored values, which closely resembles modern programming even though the machine wasn’t completed.
The post focuses on the programming ideas in Lovelace’s work—how to express a method so it’s executable, checkable, and understandable—rather than trying to reconstruct the Engine’s hardware or settle every historical dispute.
The Analytical Engine was a proposed general-purpose machine designed to:

- store values in a memory (the “store”)
- perform operations on them (the “mill”)
- run a planned procedure step by step, directed by punched cards
That architecture matters because it separates hardware that executes from programs that specify steps—the same split modern computers rely on.
Bernoulli numbers are a sequence that shows up in several mathematical formulas. They’re a good demonstration problem because each new value depends on earlier ones, requiring multiple operations, intermediate storage, and repeatable steps—exactly the kind of structured work you want to test on a programmable machine.
A step table forces precision. It makes you specify:

- what happens at each step
- which values are read and which are written
- what order the operations run in
That’s why it resembles modern pseudocode and helps others “run” the procedure without guessing.
Repetition is the early form of iteration: you define a small set of steps, change something each pass (like a counter or partial sum), and stop when a condition is met. In modern code, that maps to for/while loops with:

- a counter or other state that changes each pass
- an update that moves toward the result
- a stop condition
Because a machine can’t rely on context or memory the way humans do. Clear variable-like labels let you track:

- which value is current
- where intermediate results live
- which results must persist for later steps
This reduces the most common long-procedure error: mixing up similar-looking quantities.
Abstraction separates the method (the algorithm) from the mechanics (how the machine carries it out). That’s the seed of reusable components:

- the method is described once and applied many times
- the mechanics can change without rewriting the method
In modern terms, that’s how functions and modules make systems scalable.
The label is debated because:

- the Engine was never built, so her program never ran
- Babbage had drafted earlier example calculations for the machine
- the work was collaborative, blurring single authorship
A safe takeaway is that her Notes clearly articulate what programming is: writing an unambiguous procedure a machine could follow.
Use a quick design pass before coding:

- define the inputs and outputs
- list the steps in order
- name the state that changes and the state that persists
- decide what repeats and when it stops
For related guides, see /blog/how-to-write-better-requirements and /blog/pseudocode-examples.