By replacing function names with near-random names, changing the calling patterns for those functions, and pre-compressing the file ("shrinking"), you are effectively removing the repetitive blocks that a compression algorithm could otherwise find and use to compress the file further.
With no repetitive blocks in the data stream, there is nothing left for a compression algorithm to deduplicate. The reason you are seeing an increase in file size is that the compression format adds its own overhead to the output. A dictionary-based compressor builds a dictionary of "phrases" that are duplicated within the file, and the compressed file is essentially a list of lookups into that dictionary. When the file contains no duplicates, the dictionary ends up being a copy of the original data, and the lookups are still stored on top of it, each one pointing to a single dictionary entry.
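To make that concrete, here is a minimal sketch using Python's `zlib` (a DEFLATE implementation, used purely as a stand-in for whatever compressor you are actually running). The sample inputs are made up for the example: the repetitive buffer compresses well, while the high-entropy buffer, which mimics shrunken/obfuscated data, gains nothing and comes out slightly larger than it went in because of the format's own header and checksum overhead.

```python
import os
import zlib

# Invented sample data: repetitive text vs. random bytes of the same length.
repetitive = b"function doThing() { return doThing(); }\n" * 1000
random_like = os.urandom(len(repetitive))  # stands in for already-shrunken data

for label, data in (("repetitive", repetitive), ("high-entropy", random_like)):
    out = zlib.compress(data, level=9)
    # The high-entropy case prints an output size a few bytes LARGER than the input.
    print(f"{label:12s}  in={len(data):6d}  out={len(out):6d}")
```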
What this means is that for an already-compressed file the output cannot come out smaller than the input. You always end up storing what is essentially the entire file, plus a lookup table telling the decompression algorithm how to rebuild the original.
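The same sketch run twice illustrates the already-compressed case. The input text is again invented for the example: the first pass shrinks it dramatically, but a second pass over the compressed output finds no duplicates to exploit and only adds the format's overhead.

```python
import zlib

original = b"var total = 0; for (var i = 0; i < n; i++) { total += i; }\n" * 500

once = zlib.compress(original, level=9)
twice = zlib.compress(once, level=9)

print("original:", len(original))  # highly repetitive text
print("once    :", len(once))      # tiny: the repeats collapse into dictionary references
print("twice   :", len(twice))     # no smaller; typically a few bytes larger from header/checksum overhead
```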