Change-Id: I0710d80e34bcad1d4b1406731f2d790a0e6972f4

Node::find would walk the tree all the way down and to the right when
given an out-of-bounds index. This doesn't affect the return value
(None), but it does waste cycles walking all the way down to the
deepest rightmost leaf.
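A minimal sketch of the early-out, assuming a tree node that can
report its subtree length (the types here are illustrative, not the
real memtree definitions):

```rust
enum Node {
    Leaf(Vec<u32>),
    Inner(Vec<Node>),
}

impl Node {
    fn len(&self) -> usize {
        match self {
            Node::Leaf(items) => items.len(),
            Node::Inner(children) => children.iter().map(Node::len).sum(),
        }
    }

    fn find(&self, mut index: usize) -> Option<&u32> {
        // Bail out before descending: an out-of-bounds index can never
        // match, so there is no point walking to the deepest leaf.
        if index >= self.len() {
            return None;
        }
        match self {
            Node::Leaf(items) => items.get(index),
            Node::Inner(children) => {
                for child in children {
                    if index < child.len() {
                        return child.find(index);
                    }
                    index -= child.len();
                }
                None
            }
        }
    }
}
```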
Change-Id: Ic2f72aa96291a25819fc6c3c2f060fe0182a7663

Change-Id: I319a2de0da0ff71f0f337e5a17ef199f23254b11

A single linear array does everything we need here, since we don't
actually use the cheap writes the BTreeMaps would permit us, and we
save ourselves the hassle of maintaining two parallel lookup structures.
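A sketch of the resulting shape, with hypothetical field names: build
the sorted array once, then answer lookups by binary search.

```rust
struct Table {
    entries: Vec<(String, u64)>, // sorted by name
}

impl Table {
    fn new(mut entries: Vec<(String, u64)>) -> Self {
        entries.sort_by(|a, b| a.0.cmp(&b.0));
        Table { entries }
    }

    fn get(&self, name: &str) -> Option<u64> {
        self.entries
            .binary_search_by(|(n, _)| n.as_str().cmp(name))
            .ok()
            .map(|i| self.entries[i].1)
    }
}
```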
Change-Id: I418b0aaa7a3191fab3195f36f2c68ac0f5f0382b

Change-Id: I8a928b57ecc81bea31d757e73b9ece9474628db4

This makes the filesystem more like, e.g., /nix/store.
~/src/ripple> ./target/release/add fossil
puma5z7rnb4tmnqk8ixgryobay9ifg8txh69635snkgx8dis6quo
~/src/ripple> ./target/release/mount &
~/src/ripple> ls mnt/puma5z7rnb4tmnqk8ixgryobay9ifg8txh69635snkgx8dis6quo
benches build.rs Cargo.toml src
Change-Id: Ic35f81ffec521f49ce2e4a414919e1ff717d7041

Essentially, memtree::Node becomes more of a NodeRef, and Node gets
renamed to NodeBuf. This permits calling Node::find with an arbitrary
owned Directory, without having to move it into the enum.
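Roughly like this (illustrative definitions, not the actual ones):

```rust
struct Directory;
struct FileBuf;

enum NodeBuf {
    Directory(Directory),
    File(FileBuf),
}

enum NodeRef<'a> {
    Directory(&'a Directory),
    File(&'a FileBuf),
}

impl NodeBuf {
    fn as_ref(&self) -> NodeRef<'_> {
        match self {
            NodeBuf::Directory(d) => NodeRef::Directory(d),
            NodeBuf::File(f) => NodeRef::File(f),
        }
    }
}
```

Since NodeRef only borrows, find can wrap any &Directory on the fly.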
Change-Id: I93838932a00f2e2073e3c7ddf7ce8d302ed4ed59

This replaces the tuples with a DirectoryEntry struct.
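Something along these lines (field names are guesses):

```rust
struct Node;

struct DirectoryEntry {
    name: String,
    node: Node,
}
```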
Change-Id: I42a49fee03f7abfac9143c48106ebeb964814ca9

Change-Id: I2c9b2a15ac066ec2295d54665afd301f396efdc1

Previously, the CLI took Directory protobufs as input or wrote them as
output. Now we just deal in store hashes.
Change-Id: I5e0f0f33929ede43d971080c33bdb865f1832b2e

These decode digests to and from zbase32 for user-facing uses.
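For reference, a from-scratch sketch of the encoding direction
(zbase32 is base32 over the alphabet below, most significant bits
first; the actual helpers presumably pair this with a decoder):

```rust
const ZBASE32: &[u8; 32] = b"ybndrfg8ejkmcpqxot1uwisza345h769";

fn zbase32_encode(data: &[u8]) -> String {
    let mut out = String::with_capacity(data.len() * 8 / 5 + 1);
    let (mut acc, mut bits) = (0u16, 0u32);
    for &byte in data {
        acc = (acc << 8) | u16::from(byte);
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ZBASE32[usize::from((acc >> bits) & 0x1f)] as char);
        }
    }
    if bits > 0 {
        // Left-align the final partial group, padding with zero bits.
        out.push(ZBASE32[usize::from((acc << (5 - bits)) & 0x1f)] as char);
    }
    out
}
```

A 32-byte digest encodes to 52 characters, matching the store names
shown above.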
Change-Id: Ibd2db960044a97812d18d1a3c107521d78bd7f24

stdlib code seems to place these before the blocks,
so let's copy their style.
Change-Id: Ic77ed43bc8c6807c5ddb126e624f263f8bca5b66

This changes it from building an implicit top-level directory
containing all its args, to simply accepting a directory.
Change-Id: Iaf00e07d8568367b9eb27f365e8a2eaac3576974

Change-Id: I8e976279bd7aaaaf325129dc5c68a6ca5c750dc6

`add` takes about 10 seconds to run on a full LLVM tree, unless it
spends 4 minutes mostly waiting for a series of tiny fsyncs, which it
previously did.
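The gist of the fix, sketched with illustrative names: write
everything through one handle and pay for a single sync at the end,
instead of an fsync per tiny chunk.

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn write_chunks(file: File, chunks: &[Vec<u8>]) -> std::io::Result<()> {
    let mut out = BufWriter::new(file);
    for chunk in chunks {
        out.write_all(chunk)?;
    }
    let file = out.into_inner()?;
    file.sync_all()?; // one fsync for the whole batch
    Ok(())
}
```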
Change-Id: I492604bae68e3472f8626a112a33d023947e0e86

Change-Id: Ic410619a6115a7059b79593c6fade38236d4e8c1

This adds clap to all our binaries. Only `add` currently takes any
args, but previously, the others did not reject args as they should.
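With clap's derive API, even a binary with no options gets that for
free; a unit struct still makes unexpected arguments an error:

```rust
use clap::Parser;

/// Sketch of one of the zero-argument binaries.
#[derive(Parser)]
struct Args {}

fn main() {
    let _args = Args::parse(); // rejects any unexpected args
}
```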
Change-Id: I6257fb3b218c624ee0124f6ed7313a579db88c4c

This drops the manual `len <= MIN_CHUNK_SIZE` check, and instead
combines it into acquiring the to-be-scanned chunk.
The pointer-based design doesn't need the iterator to be enumerated
from the start of the buffer, so we don't need to use take/skip.
Throughput improves about 5%.
Change-Id: Ic430c7afde68bf1acfba1a2137a0b8ac064176ea

While `const fn` isn't permitted to do float computation, a regular
`const` is. This deals with LLVM's reluctance to inline
discriminator_from_average, without forcing us to hardcode a magic
number.
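Sketched below; the formula and constants follow casync's
discriminator_from_avg and are merely illustrative of the technique:

```rust
const AVERAGE_CHUNK_SIZE: usize = 64 * 1024;

// Float math is fine in a `const` item, so the discriminator can be
// computed at compile time instead of being pasted in as a literal.
const DISCRIMINATOR: u32 = (AVERAGE_CHUNK_SIZE as f64
    / (-1.42888852e-7 * AVERAGE_CHUNK_SIZE as f64 + 1.33237515))
    as u32;
```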
Change-Id: Ibdbfa4c733468517a2feff1ec0deedd1d9b70d47

We already check for `self.buffer.len() <= MIN_CHUNK_SIZE`, but LLVM
doesn't seem to notice. This boosts throughput by 35%.
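One way to get that effect, as a sketch (the real chunker differs):
iterate over the subslice instead of indexing, so there is no
per-element bounds check left for LLVM to miss.

```rust
const MIN_CHUNK_SIZE: usize = 16 * 1024;
const DISCRIMINATOR: u32 = 4096; // placeholder value

fn find_boundary(buffer: &[u8]) -> Option<usize> {
    if buffer.len() <= MIN_CHUNK_SIZE {
        return None;
    }
    let mut hash = 0u32;
    for (i, &byte) in buffer[MIN_CHUNK_SIZE..].iter().enumerate() {
        hash = hash.rotate_left(1) ^ u32::from(byte); // toy rolling hash
        if hash % DISCRIMINATOR == DISCRIMINATOR - 1 {
            return Some(MIN_CHUNK_SIZE + i + 1); // cut after this byte
        }
    }
    None
}
```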
Change-Id: I1a0e07d276dcc285f8dec3149a629cb6e865c286

Change-Id: I4fed55703cd02833f377ed0bbc659f3fcfdb949f

This improves performance by ~12%.
Change-Id: I5612b624da77b138fcfb44cbb439b0106580ed70

This improves performance by ~17%. I had *expected* that rustc would
have reduced it to a constant already, but alas.
Change-Id: I5c15fe90244da64498d2d6562262db58242ffb24

Performance hovers around 300MiB/s on my machine.
Change-Id: I387ccbf065c0b667824ede0675e6a295722f6d4b

Change-Id: I7f7c5556dda64f0055f1b6d2da37c36b5c684092

Change-Id: Ib5a0bc2fb5b725dfe1f7f4557838529711407203

Full test coverage for fossil/chunker! :)
Change-Id: I0436a266220bbed6d85c291dcca827d1770294dd

We never actually use this directly, and the resulting branch is test
coverage noise.
Change-Id: Id32b056ca0cd57965d829085d768012e5a9e05ce

Full test coverage for Chunker::next!
Change-Id: I4f3dbad7e0a56f46d5714e0dd8e07f00ce255928

Free test coverage win! :)
Change-Id: I9bab30e0f0da2810c770cbd8ba5603f0eb2b28e7

Change-Id: Ia5adb5a9056fd0e9ddcd8667c56129219b9d6f52

This ensures that MIN_CHUNK_SIZE-sized chunks can actually be emitted,
and adds tests for both MIN_CHUNK_SIZE and MAX_CHUNK_SIZE chunks.
The behaviour for all cases now verifiably matches casync.
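In the spirit of those tests, a sketch (the `Chunker` constructor and
iterator item here are guesses at the API, not the real one):

```rust
#[test]
fn chunk_sizes_stay_within_bounds() {
    let data = vec![0x5au8; 4 * MAX_CHUNK_SIZE];
    let chunks: Vec<&[u8]> = Chunker::new(&data).collect();
    for (i, chunk) in chunks.iter().enumerate() {
        assert!(chunk.len() <= MAX_CHUNK_SIZE);
        // Only the final chunk may fall short of the minimum.
        if i + 1 < chunks.len() {
            assert!(chunk.len() >= MIN_CHUNK_SIZE);
        }
    }
    // Concatenating the chunks must reproduce the input exactly.
    assert_eq!(chunks.concat(), data);
}
```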
Change-Id: Ie0bfaf50ec02658069da83ebb30210e6e1963de6

This takes up 80% less vertical space for something that isn't readable
to humans to begin with.
Change-Id: I04aa27755f0b8d6cdaa83d766f8bf0ecbe3b7a46

Change-Id: I7eb02482772f48ca9f486f514b89652a9c5730cd

sled doesn't actually promise not to eat your data until you invoke
flush. This is observable under normal circumstances as add_path
occasionally failing to commit anything to the store.
While we're at it, ensure we're syncing the chunks file data to disk,
so the database *might* actually be consistent. We're not going for
full crash-correctness yet, mostly for performance and complexity
reasons.
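The commit sequence, roughly (the helper is hypothetical; sled's
flush is real): data first, metadata second.

```rust
use std::fs::File;

fn commit(chunks_file: &File, db: &sled::Db) -> Result<(), Box<dyn std::error::Error>> {
    // Sync the chunk bytes before flushing the metadata that points
    // at them, so the database never references unwritten data.
    chunks_file.sync_data()?;
    db.flush()?;
    Ok(())
}
```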
Change-Id: I6cc69a013dc18cd50df26e73801b185de596565c

This implements casync-style content-defined chunking and deduplication.
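The core loop, as a minimal sketch rather than the crate's actual
implementation (a toy hash stands in for casync's buzhash):

```rust
const MIN_CHUNK_SIZE: usize = 16 * 1024;
const MAX_CHUNK_SIZE: usize = 256 * 1024;
const DISCRIMINATOR: u64 = 64 * 1024; // tunes the average chunk size

fn split(mut data: &[u8], mut emit: impl FnMut(&[u8])) {
    while !data.is_empty() {
        let mut hash = 0u64;
        let mut cut = data.len().min(MAX_CHUNK_SIZE);
        for (i, &byte) in data.iter().enumerate().take(cut) {
            hash = hash.wrapping_mul(31).wrapping_add(u64::from(byte));
            // A boundary is wherever the hash hits a target residue,
            // so cut points follow content, not absolute offsets.
            if i + 1 >= MIN_CHUNK_SIZE && hash % DISCRIMINATOR == DISCRIMINATOR - 1 {
                cut = i + 1;
                break;
            }
        }
        let (chunk, rest) = data.split_at(cut);
        emit(chunk);
        data = rest;
    }
}
```

Deduplication then falls out of storing chunks by content digest:
identical regions produce identical chunks, which are stored once.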
Change-Id: I42a9b98e1140bed462a5ae1e0aba508bebc9fa0e

FUSE doesn't actually respect the usual read() contract, so this
resulted in us serving truncated files.
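A fill loop along these lines addresses it (a sketch, not the actual
code): only EOF may end a read short.

```rust
use std::io::Read;

fn read_full(source: &mut impl Read, buf: &mut [u8]) -> std::io::Result<usize> {
    let mut filled = 0;
    while filled < buf.len() {
        match source.read(&mut buf[filled..])? {
            0 => break, // EOF
            n => filled += n,
        }
    }
    Ok(filled)
}
```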
Change-Id: I8bdb0bd7f03162fb78774f3f84daeefc5ba5e3b1

Change-Id: I1c4a5d16cf10c464f9835c961481da221aa0d12e

Change-Id: I358a5c354c19cc8cb0a75629758fda476629406d

Change-Id: Iae189c3107a6841bcbdd75bb57dde785f9548130

Change-Id: I69ae53824e149133aa6bb61dda201f972c840b1f

One step closer to genuine incremental blob reads.
Change-Id: I796710820c1b69baad91a6dc65f9d7f0dee311d3

Change-Id: I12c590d842471bf543f16fd21056224d8a7c0857

This will primarily allow us to amortise metadata lookups.
Change-Id: Ic92781bf1ded5af62f6e955322bb89623afb2061

Change-Id: I625530fe2f4db89be5889e46f0a5ed50727c8cd1

The sled chunk store works fine, but has pretty awful performance,
since sled just doesn't make a particularly good large blob store.
This change replaces it with a file-based store using sled purely as a
metadata store.
Adding LLVM's source tree takes about 120s before, and about 15s after.
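Illustrative shape of the split (not the actual schema): bulk bytes go
to one append-only file, while sled keeps only digest -> (offset, len).

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom, Write};

type Error = Box<dyn std::error::Error>;

fn put(file: &mut File, db: &sled::Db, digest: &[u8], chunk: &[u8]) -> Result<(), Error> {
    let offset = file.seek(SeekFrom::End(0))?;
    file.write_all(chunk)?;
    let mut meta = offset.to_le_bytes().to_vec();
    meta.extend_from_slice(&(chunk.len() as u64).to_le_bytes());
    db.insert(digest, meta)?; // tiny value: right in sled's wheelhouse
    Ok(())
}

fn get(file: &mut File, db: &sled::Db, digest: &[u8]) -> Result<Option<Vec<u8>>, Error> {
    let Some(meta) = db.get(digest)? else { return Ok(None) };
    let offset = u64::from_le_bytes(meta[..8].try_into()?);
    let len = u64::from_le_bytes(meta[8..16].try_into()?);
    file.seek(SeekFrom::Start(offset))?;
    let mut buf = vec![0; len as usize];
    file.read_exact(&mut buf)?;
    Ok(Some(buf))
}
```

This keeps sled in its comfort zone (small keys and values) while the
bulk data takes the sequential-write path.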
Change-Id: I5fb22ea79a006fa6bcf5351921038f57f2484112

Change-Id: I4a94b84ef456b427422757a899fdce6198fd01a1

Change-Id: I77ace8ee9f69ccb92afaa0a41d69538d28f11583

Change-Id: I75f2e0ff57e09b026fd1aaaeb86b041ddb8238f4

This implements blob reading in terms of RawBlob, a fairly naive
streaming blob reader. For now, we still only use it for simple
one-shot reads.
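For flavour, a naive reader of this kind (hypothetical shape, with the
chunks already fetched):

```rust
use std::io::Read;

struct RawBlob {
    chunks: Vec<Vec<u8>>,
    index: usize, // current chunk
    pos: usize,   // offset within it
}

impl Read for RawBlob {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        while self.index < self.chunks.len() {
            let chunk = &self.chunks[self.index];
            if self.pos < chunk.len() {
                // Serve from the current chunk; callers keep calling
                // read() until they have what they need.
                let n = buf.len().min(chunk.len() - self.pos);
                buf[..n].copy_from_slice(&chunk[self.pos..self.pos + n]);
                self.pos += n;
                return Ok(n);
            }
            self.index += 1;
            self.pos = 0;
        }
        Ok(0) // EOF
    }
}
```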
Change-Id: Iecd4f926412b474ca6f3dde8c6055c0c3781301f