It's the Unicode. The stuff coming out of sed is UTF-16 without the 2-byte byte order mark (BOM) that PowerShell uses to differentiate between Unicode and ASCII. So PowerShell thinks it's ASCII and leaves the \0 bytes (the upper bytes of the 2-byte UTF-16 characters) in, which display as blanks. And since PowerShell deals in Unicode internally, it actually expands every original byte into a 2-byte Unicode character. There is no way to force PowerShell into treating the input as Unicode. The possible ways around it are:
Is Unicode coming as input into sed? Unlikely, but I think possible. Check that.
Make the output of sed start with the byte order mark, \uFEFF. This is probably what got missed in the sed source code:

    _setmode(_fileno(stdout), _O_WTEXT); // probably present and makes it send Unicode
    wprintf(L"\uFEFF");                  // probably missing
You can add the BOM inside the sed command itself, something like:

    sed "1s/^/\xFF\xFE/;..."  # won't work if sed produces Unicode itself, but would work if sed passes Unicode through from its input
    sed "1s/^/\uFEFF/;..."    # use if sed produces Unicode itself; hopefully sed supports \u
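For what it's worth, when both variants apply correctly they should produce the same byte stream. A quick sanity check of the byte-level logic (Python used purely for illustration):

```python
payload = "hello".encode("utf-16-le")

# Variant 1: sed passes UTF-16 bytes through; we prepend the two raw BOM bytes.
out1 = b"\xff\xfe" + payload

# Variant 2: sed itself emits UTF-16 and encodes the U+FEFF character for us.
out2 = "\ufeffhello".encode("utf-16-le")

assert out1 == out2                      # identical byte streams
assert out1.decode("utf-16") == "hello"  # a BOM-aware reader decodes cleanly
```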
Write the output of sed into a file and then read it with Get-Content -Encoding Unicode. Note that the redirection to the file must happen inside cmd.exe, like:

    cmd /c "sed ... >file"
If you just let >file be handled in PowerShell, it will be messed up in the same way.
Drop the \0 characters from the resulting text in PowerShell. This doesn't work well with international characters whose UTF-16 encoding contains a byte equal to 0x0A or 0x0D: once the \0 bytes are gone, those bytes look like line breaks, so you end up with spurious line splits instead of those characters.
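The effect described above can be sketched at the byte level (Python used purely for illustration; the actual situation arises in PowerShell):

```python
# UTF-16LE text read as if single-byte: every ASCII char gains a \0 byte.
raw = "abc".encode("utf-16-le")
assert raw == b"a\x00b\x00c\x00"          # the blanks PowerShell displays

# Stripping the \0 bytes recovers plain ASCII...
assert bytes(b for b in raw if b != 0) == b"abc"

# ...but breaks characters whose UTF-16 bytes include 0x0A or 0x0D.
# U+010A (Latin capital C with dot above) encodes as bytes 0x0A 0x01:
raw2 = "A\u010aB".encode("utf-16-le")
stripped = bytes(b for b in raw2 if b != 0)
assert stripped == b"A\x0a\x01B"          # the 0x0A byte looks like a line feed
```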