---
title: Ripple
---
# A build system for the next decade
Ripple is an experimental build system. Blending functional reactive
programming, process tracing, automatic memoization, and a smattering of
object capabilities, it aims to completely redefine the state of the art
in software assembly.
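
To make that ingredient list a little more concrete, here is a rough sketch (my own illustration, not Ripple's actual mechanism) of what process tracing buys you: by observing a build step as it runs, the build system can learn the step's complete set of inputs instead of trusting hand-written dependency lists. The sketch assumes Linux with `strace` installed.

```python
# Illustrative only: discover a command's file inputs by running it under
# strace. A real tracer would also record written outputs, environment
# variables, and child processes.
import re
import subprocess
import tempfile

def traced_inputs(cmd: list[str]) -> set[str]:
    """Return the set of files cmd opened for reading."""
    with tempfile.NamedTemporaryFile(suffix=".trace") as log:
        subprocess.run(
            ["strace", "-f", "-e", "trace=openat", "-o", log.name, *cmd],
            check=True,
        )
        inputs = set()
        for line in open(log.name):
            match = re.search(r'openat\(.*?"([^"]+)".*O_RDONLY', line)
            if match and "ENOENT" not in line:  # ignore files that were not found
                inputs.add(match.group(1))
        return inputs

# traced_inputs(["cc", "-c", "hello.c", "-o", "hello.o"]) would list hello.c
# plus every header and library the compiler actually touched.
```

Knowing the true inputs of every step is what makes automatic memoization safe: a step only ever needs to rerun when something it actually read has changed.
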
Ripple will provide completely [reproducible builds](https://reproducible-builds.org/), guaranteeing that
any piece of software which builds successfully on one computer can be
recreated bit-for-bit identically on any other computer — even ten years
down the line. This allows users of binary packages to be confident that
they're running software directly corresponding to published source
code.
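
As a concrete reading of "bit-for-bit identically": two builds of the same source are reproducible exactly when their artifacts hash to the same digest, which is also how a user would verify a published binary. A minimal sketch, with hypothetical file paths:

```python
# Hypothetical paths; "bit-for-bit identical" simply means the two files
# contain exactly the same bytes, so their cryptographic hashes match.
import hashlib
from pathlib import Path

def sha256sum(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

published = sha256sum("vendor-release/app")  # binary published by the project
rebuilt = sha256sum("local-rebuild/app")     # built locally from the same sources

assert published == rebuilt, "build is not reproducible"
```
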
While reproducible builds are nothing new, combining them with automatic
fine-grained incremental compilation is necessary to make them usable
every step of the way, as opposed to something only performed at
software release time. By doing so, Ripple will banish invocations of
`make clean` to antiquity, and save billions of hours of CPU time wasted
by CI systems pointlessly rebuilding the same thing over and over again.
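
The mechanism that makes `make clean` unnecessary can be sketched in a few lines (again my own illustration, not Ripple's actual design): key every build step by a hash of its command and the exact contents of its inputs, and skip the step whenever that key has been seen before.

```python
# Illustrative sketch of content-addressed memoization. Inputs are listed by
# hand here; a tracing build system would discover them automatically.
import hashlib
import json
import subprocess

cache: dict[str, str] = {}  # step key -> path of the already-built output

def step_key(cmd: list[str], inputs: list[str]) -> str:
    h = hashlib.sha256(json.dumps(cmd).encode())
    for path in sorted(inputs):
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_step(cmd: list[str], inputs: list[str], output: str) -> str:
    key = step_key(cmd, inputs)
    if key in cache:                  # nothing this step reads has changed
        return cache[key]
    subprocess.run(cmd, check=True)   # something changed, so rebuild
    cache[key] = output
    return output
```

Because cache entries are keyed purely by content, a stale entry can never be consulted by mistake, so there is nothing left for `make clean` to fix; the same property lets CI systems share a cache across machines instead of rebuilding everything from scratch.
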
In addition, Ripple will take care of some other tricky problems, like
coherently expressing precisely versioned dependencies across language
ecosystem boundaries, correct cross-compilation support, and precise
runtime/build-time dependency calculation.
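
To illustrate the first of those problems, here is a purely hypothetical example (names, versions, and fields invented for illustration, not Ripple's format) of what a dependency description spanning ecosystems might contain: every dependency pinned to an exact version and source hash, tagged with the ecosystem it comes from, and marked as needed at build time, at run time, or both.

```python
# Purely hypothetical dependency description, invented for illustration.
package = {
    "name": "image-service",
    "dependencies": [
        {"ecosystem": "crates.io", "name": "serde", "version": "1.0.203",
         "hash": "<hash of the exact source archive>", "needed": ["build"]},
        {"ecosystem": "system", "name": "libpng", "version": "1.6.43",
         "hash": "<hash of the exact source archive>", "needed": ["build", "run"]},
        {"ecosystem": "pypi", "name": "pillow", "version": "10.3.0",
         "hash": "<hash of the exact source archive>", "needed": ["run"]},
    ],
}
```
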
## Doesn't this exist already?
No. I've looked.
## Why doesn't this exist already?
While I can't answer this for sure, I would posit the following: build
systems and package managers are [infrastructure](https://www.destroyallsoftware.com/talks/a-whole-new-world).
They cannot be marketed (who would pay for a build system in this day
and age?) and need to be rethought from the ground up, so the design and
implementation of such is relegated to a select few: academics, bored
nerds, and very large companies with insane scaling requirements.
Academic research projects are usually left to gather dust once the
thesis is said and done. Nerds build toy clones of existing software,
except it's going to be better this time. Megacorps don't need their
build system to handle reproducibility — they just vendor everything
into their humongous monorepo and call it a day.
## Why don't you just ...?
Given that existing package managers and build systems overlap in scope
with Ripple, this is a perfectly reasonable question to ask. Why not
just combine [two](https://github.com/NixOS/nix) [existing](https://bazel.build/) tools which each hold half of the solution?
On a surface level, this seems perfectly innocent: reducing the amount
of work that has to be put in and avoiding fracturing ecosystems is
surely a good thing! Digging deeper, however, you'll quickly realise
the following: for this arrangement to work efficiently, both tools need
fairly invasive modifications to integrate with each other. Hard fork
territory.
So, attempting to reuse existing software would require some fairly
significant effort nonetheless, and also involve taking on legacy from
the get-go. On the other hand, designing a new system from the ground
up will be vastly simpler, as we can entirely do away with the build
system/package manager dichotomy and make entire swaths of cross-cutting
concerns disappear into thin air.
## Why doesn't *this* exist already?
Designing and implementing a project of this scope and complexity is
going to require a significant investment of time and energy. It's not
something that will get to a usable state in the near future if I'm just
tinkering on it at the weekend, so I'm currently seeking funding that
would allow me to work on it full-time. Watch this space!