It's mostly deterministic: filesystems use well-defined algorithms to decide the best place for new data. But it's not practically possible to duplicate all of their internal state, so you have to consider that:
- different filesystems (ext4, btrfs, NTFS...) use different allocation algorithms;
- the allocation can also be influenced by the program doing the writing (e.g. a file that slowly grows to 100 MB will sometimes be laid out differently from one created by fallocate()'ing 100 MB at once; see the sketch after this list);
- as well as by other programs writing to disk at the same time, since the allocation of file B depends on whether file A has already been written (any remaining determinism goes away on a multi-core or multi-CPU system);
- the size and location of existing files;
- the size and location of deleted files (e.g. on log-structured filesystems, writes only go forward);
- different disk types (filesystems may care much less about fragmentation when writing to solid-state drives than to magnetic disks);
- physical corruption (if one sector goes bad, the filesystem might choose to place the entire file elsewhere instead of just skipping that one sector).
And finally, even if both example computers have 1:1 copies of the raw disk contents, some filesystems may make random choices if that's written into the algorithm. From a quick grep, it seems that at least ext4 uses a random choice as a fallback when all candidates are otherwise equal.
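To illustrate the point about write patterns above, here is a rough sketch (plain C on Linux; the file names, sizes and chunk size are made-up examples, not anything from the question). It creates one 100 MB file by appending small chunks with an fsync() after each one, and another by reserving the full size up front with posix_fallocate(), so the allocator sees two very different request patterns. Whether the resulting layouts actually differ depends on the filesystem and its current state.

```c
/* Sketch only: two files whose block layout the filesystem *may* place
 * differently depending on how the writes arrive. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define TOTAL (100 * 1024 * 1024)   /* 100 MB, as in the example above */
#define CHUNK (64 * 1024)           /* small appends simulate slow growth */

int main(void)
{
    char buf[CHUNK];
    memset(buf, 'x', sizeof buf);

    /* File 1: grows slowly, so the allocator sees many small requests. */
    int slow = open("slow_grown.bin", O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (slow < 0) { perror("open slow_grown.bin"); return 1; }
    for (size_t written = 0; written < TOTAL; written += CHUNK) {
        if (write(slow, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
        fsync(slow);   /* force an allocation decision per chunk (slow!) */
    }
    close(slow);

    /* File 2: reserved in one go, so the allocator can pick one large run. */
    int pre = open("preallocated.bin", O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (pre < 0) { perror("open preallocated.bin"); return 1; }
    int err = posix_fallocate(pre, 0, TOTAL);
    if (err != 0) { fprintf(stderr, "posix_fallocate: %s\n", strerror(err)); return 1; }
    close(pre);

    puts("Compare layouts with: filefrag -v slow_grown.bin preallocated.bin");
    return 0;
}
```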
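And if you want to see what the allocator actually decided, you can dump a file's physical extents with the FIEMAP ioctl (this is what `filefrag -v` uses under the hood). A minimal sketch, Linux-only and assuming the filesystem reports extents; running it against the "same" file on two machines (or against the two files from the previous sketch) shows directly whether the physical placement matches:

```c
/* Print the logical -> physical extent mapping of a file via FIEMAP. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

#define MAX_EXTENTS 32   /* enough for a quick look; real tools loop */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    size_t sz = sizeof(struct fiemap) + MAX_EXTENTS * sizeof(struct fiemap_extent);
    struct fiemap *fm = calloc(1, sz);
    if (!fm) { perror("calloc"); return 1; }

    fm->fm_start = 0;
    fm->fm_length = ~0ULL;              /* map the whole file */
    fm->fm_flags = FIEMAP_FLAG_SYNC;    /* flush so delayed allocation is resolved */
    fm->fm_extent_count = MAX_EXTENTS;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }

    printf("%u extent(s):\n", fm->fm_mapped_extents);
    for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
        struct fiemap_extent *e = &fm->fm_extents[i];
        printf("  logical %llu  physical %llu  length %llu\n",
               (unsigned long long)e->fe_logical,
               (unsigned long long)e->fe_physical,
               (unsigned long long)e->fe_length);
    }
    free(fm);
    close(fd);
    return 0;
}
```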