AWS WorkSpaces image creation fails because of an s3fs-fuse mount on Ubuntu 22.04

I’m trying to mount an S3 bucket using s3fs on an AWS WorkSpace running Ubuntu 22.04.

The mount works perfectly when I use the WorkSpace normally, but it fails during the image creation process. I don't see any logs, possibly because image creation runs in a separate environment or a temporary WorkSpace instance.

Here’s my setup:

echo "$ACCESS_KEY:$SECRET_KEY" | sudo tee /etc/passwd-s3fs > /dev/null
sudo chmod 600 /etc/passwd-s3fs
sudo touch /etc/systemd/system/s3fs-mount.service
sudo chmod 644 /etc/systemd/system/s3fs-mount.service

Contents of /etc/systemd/system/s3fs-mount.service:

[Unit]
Description=Mount S3 bucket with s3fs
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mount-s3fs.sh
ExecStop=/bin/umount /datafolder
StandardOutput=journal
StandardError=journal
TimeoutStartSec=30
SuccessExitStatus=0 1
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

sudo touch /usr/local/bin/mount-s3fs.sh
sudo chmod 755 /usr/local/bin/mount-s3fs.sh

Contents of /usr/local/bin/mount-s3fs.sh:

#!/bin/bash

LOGFILE=/var/log/s3fs-mount.log
BUCKET_NAME=datafolder
MOUNT_POINT=/datafolder

log() {
    echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" >> $LOGFILE
}

if ! command -v s3fs >/dev/null 2>&1; then
    log "s3fs not installed."
    exit 0
fi

if mountpoint -q "$MOUNT_POINT"; then
    log "$MOUNT_POINT is already mounted."
    exit 0
fi

log "Mounting..."
mkdir -p "$MOUNT_POINT"

s3fs "$BUCKET_NAME" "$MOUNT_POINT" -o _netdev,rw,nosuid,nodev,allow_other,nonempty,default_acl=public-read,umask=0022
if [ $? -eq 0 ]; then
    log "Successfully mounted $BUCKET_NAME to $MOUNT_POINT"
    exit 0
fi

log "Failed to mount $BUCKET_NAME to $MOUNT_POINT"

Final Setup:

sudo touch /var/log/s3fs-mount.log
sudo chmod 644 /var/log/s3fs-mount.log
sudo systemctl daemon-reload
sudo systemctl enable s3fs-mount.service
sudo systemctl start s3fs-mount.service

Here’s what’s likely going wrong and how to address it.


Likely Causes of Failure During Image Creation

  1. Network stack or DNS resolution not ready
  • Even though your service waits for network-online.target, that target doesn't always guarantee fully functional external connectivity, especially in AWS's image builder process.
  2. Missing credentials during the image creation process
  • If you're using passwd-s3fs, those credentials must be readable during the imaging process.
  • Image creation might involve a separate root-like environment that doesn't have access to files outside basic OS paths.
  • If you're using IAM roles and omitted passwd-s3fs, s3fs may not have access because instance metadata may be blocked or IAM credentials may not be applied in the image builder environment.
  3. FUSE might not be usable during image creation
  • AWS image creation may not permit FUSE mounts, or the necessary kernel modules or permissions may be missing at the point your service runs.
  4. No logging because the service fails too early
  • If the service fails before the log file is opened or written to, or if /var/log isn't writable yet, you won't see any output. A quick pre-flight check covering all four causes is sketched below.
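
To tell these causes apart quickly, the pre-flight check mentioned above could look something like this (a minimal sketch reusing the paths from your setup; run it by hand or as an ExecStartPre= step):

#!/bin/bash
# preflight-s3fs.sh - rough checks for the four failure causes above.

LOGFILE=/var/log/s3fs-mount.log

check() {
    echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | systemd-cat -t s3fs-preflight
}

# 1. Network: can we reach an external HTTPS endpoint?
if curl -s --max-time 5 https://aws.amazon.com > /dev/null; then
    check "network: OK"
else
    check "network: no external connectivity yet"
fi

# 2. Credentials: is the passwd-s3fs file present and readable?
if [ -r /etc/passwd-s3fs ]; then
    check "credentials: /etc/passwd-s3fs readable"
else
    check "credentials: /etc/passwd-s3fs missing or unreadable"
fi

# 3. FUSE: is the device node available?
if [ -e /dev/fuse ]; then
    check "fuse: /dev/fuse present"
else
    check "fuse: /dev/fuse missing"
fi

# 4. Logging: is /var/log writable at this point in boot?
if touch "$LOGFILE" 2>/dev/null; then
    check "logging: $LOGFILE writable"
else
    check "logging: $LOGFILE not writable"
fi

Everything goes to the journal via systemd-cat, so the results survive even if the log file can't be written.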

What You Can Do

1. Add verbose logging to s3fs

Change your mount-s3fs.sh to enable debugging:

s3fs "$BUCKET_NAME" "$MOUNT_POINT" -o _netdev,rw,nosuid,nodev,allow_other,nonempty,default_acl=public-read,umask=0022,dbglevel=info,curldbg

This will emit more detailed logs to stderr, which should show up in journalctl or systemctl status.

You could also redirect s3fs output directly to your log file:

s3fs "$BUCKET_NAME" "$MOUNT_POINT" ... >> $LOGFILE 2>&1

2. Confirm logging actually happens

Ensure your script logs something right at the start, for example:

log "Starting mount-s3fs.sh"

Then after image creation, inspect /var/log/s3fs-mount.log inside the new instance to confirm whether the script ran at all.
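
If /var/log turns out not to be writable that early, the log() helper can mirror every message to the systemd journal as well (a sketch; systemd-cat is part of systemd on Ubuntu 22.04):

log() {
    msg="$(date +'%Y-%m-%d %H:%M:%S') - $1"
    echo "$msg" >> "$LOGFILE" 2>/dev/null
    # Mirror to the journal so journalctl -u s3fs-mount.service still
    # shows the message even when the log file cannot be written.
    echo "$msg" | systemd-cat -t s3fs-mount
}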

3. Test FUSE availability

In the image creation environment, FUSE support may be limited or disabled. You can guard against this with an explicit check:

if [ ! -e /dev/fuse ]; then
    log "/dev/fuse does not exist. FUSE may not be available."
    exit 1
fi

Also check:

lsmod | grep fuse

If fuse isn’t loaded or available, your mount will silently fail.

4. Delay execution

It’s possible the systemd service runs too early despite network-online.target. Add a delay to see if that helps:

sleep 10

Insert this before the mount attempt in your script. For more robust solutions, you could write a systemd timer or use a loop that waits until external access is confirmed, like:

for i in {1..5}; do
    curl -s https://aws.amazon.com > /dev/null && break
    sleep 5
done
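
A slightly stronger variant probes the S3 endpoint s3fs will actually talk to, with a bounded total wait (a sketch; adjust the endpoint if your bucket lives in a specific region):

# Wait up to ~60 seconds for the S3 endpoint to answer.
for i in {1..12}; do
    if curl -s --max-time 5 https://s3.amazonaws.com > /dev/null; then
        break
    fi
    sleep 5
done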

5. Use systemd debug output

Enable detailed service logs during image creation:

sudo systemctl edit s3fs-mount.service

Then add:

[Service]
Environment=SYSTEMD_LOG_LEVEL=debug

Or run journal inspection after reboot:

journalctl -u s3fs-mount.service

This may show permission errors, timeouts, or missing environment components.
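
After the new WorkSpace boots, you can dump everything the unit logged during the current boot in one go:

journalctl -b -u s3fs-mount.service --no-pager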

6. Alternative: Mount post-boot

If mounting via s3fs is unreliable or unnecessary during imaging, delay the mount until the first boot of the new WorkSpace. You can do this by:

  • Using cloud-init (via a user data script)
  • Delaying the systemd mount until after login

For example, modify your service:

After=default.target

Or add:

ExecStartPre=/bin/sleep 30

Either option delays execution until the system is further into boot.
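
If you want the mount skipped during image creation entirely, one approach is a marker file that the service writes on its first run and checks afterwards (a sketch; the /var/lib/s3fs-mount/.imaged path is an invented convention, not anything AWS provides):

#!/bin/bash
# Wrapper: skip the mount on the first boot (likely the image-creation
# environment) and mount normally on every boot after that.
MARKER=/var/lib/s3fs-mount/.imaged

if [ ! -f "$MARKER" ]; then
    mkdir -p "$(dirname "$MARKER")"
    touch "$MARKER"
    exit 0
fi

exec /usr/local/bin/mount-s3fs.sh

Since the marker gets baked into the captured image, WorkSpaces launched from it will mount on their first real boot.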


Summary

Your mount is likely failing during image creation because of:

  • Missing network readiness
  • Lack of credentials or access to /etc/passwd-s3fs
  • FUSE not available
  • Logs not being written due to early failure

Add verbose logging, check for FUSE and credentials, and consider deferring the mount until after the image boots for the first time. This avoids hitting the restricted image build environment altogether. Let me know if you want help adding a boot-time-only mount configuration.