Description
I’m using your CXL-DMSim (gem5 FS) setup and trying to reproduce and extend the experiments by adding workloads (Redis + memtier_benchmark). My environment is a cloud server where loop mounting is not available, and the guest OS inside gem5 appears to have no network interface (only lo), so I can’t install dependencies via apt inside the guest.
Could you please share the recommended / official workflow you used to add workloads into the provided disk image and ensure they run correctly?
My setup
Host: cloud VM (cannot use loop mount)
Guest: no networking (ip a shows only lo; ping 8.8.8.8 reports "Network is unreachable")
What I tried
Injected files into the ext4 partition using debugfs:
debugfs -w -R "write <host_file> /home/cxl_benchmark/" parsec_p1.img
then merged the partition back with dd:
dd if=parsec_p1.img of=parsec.img bs=512 seek=2048 conv=notrunc
For memtier, I created a portable bundle (binary + copied shared libraries + an LD_LIBRARY_PATH wrapper). This works.
For Redis, bundling /usr/bin/redis-server with its copied libraries causes a segfault inside the guest, likely due to an ABI/glibc mismatch between the host and the guest rootfs.
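For reference, here is the injection flow I described, consolidated into one script. This is my own sketch, not your workflow: the sector offset 2048 comes from my image (verify with fdisk -l), and the inject helper name is mine.

```shell
#!/bin/sh
# Sketch: inject host files into the first ext4 partition of a gem5 disk
# image without loop mounting. Assumes the partition starts at sector 2048.
set -e

IMG=parsec.img          # full disk image
PART=parsec_p1.img      # extracted partition image
OFFSET=2048             # partition start, in 512-byte sectors (assumption)
DEST=/home/cxl_benchmark

# Copy one host file into the ext4 partition via debugfs (no mount needed).
# debugfs's "write" expects a destination *file* path, so the basename is
# appended explicitly rather than writing to a bare directory.
inject() {
    debugfs -w -R "write $1 $DEST/$(basename "$1")" "$PART"
}

# Typical flow (commented out so the script is safe to source):
# dd if="$IMG" of="$PART" bs=512 skip="$OFFSET"               # 1. extract
# inject ./memtier_benchmark                                  # 2. add files
# inject ./redis-server
# dd if="$PART" of="$IMG" bs=512 seek="$OFFSET" conv=notrunc  # 3. merge back
```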
Questions
Did you build the workloads inside the guest rootfs (so they match the guest glibc), or on the host?
If on the host, did you use:
static linking (musl/static), or
a chroot/container that matches the guest OS version (to ensure ABI compatibility), or
another approach?
Do you have a “known-good” workload package / binaries (e.g., prebuilt memtier/redis, or scripts) that you used for the paper experiments?
Is guest networking intentionally disabled in your image/config? If so, is there an intended way to install packages (offline debs, chroot build, etc.)?
Any recommended settings for persistent vs non-persistent Redis during benchmarking (e.g., AOF/RDB disabled), and how did you collect tail latency (P50/P99)?
If you have a short step-by-step reproducibility note (or the exact scripts you used to inject workloads), that would be extremely helpful.
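On the persistence question: the setup I currently plan to use is sketched below. This is my own assumption about a reasonable benchmarking configuration, not your confirmed settings, and --print-percentiles is only available in newer memtier_benchmark releases (check your version's --help).

```shell
# Disable both Redis persistence mechanisms for benchmarking:
#   --save ""        turns off RDB snapshots
#   --appendonly no  turns off the append-only file
redis-server --save "" --appendonly no &

# memtier_benchmark prints a latency summary; recent versions let you
# select the reported percentiles explicitly:
memtier_benchmark -s 127.0.0.1 -p 6379 --print-percentiles 50,99
```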
Thanks a lot!