

The default Commodore math libraries on the Amiga weren't the fastest, but if you don't count transcendental instructions on the 040/060, they were nowhere near 2.5x slower than the later hand-coded libraries. So, theoretically, you could use your math libs on a real Mac and still see the gains? If so, jeez, did Apple drop the ball.
Apple's memory manager moves code around all the time as it needs to. This is why you don't get a physical memory address when you allocate memory on a Mac. Instead, you get a handle (which is a pointer to a pointer). That handle has to be dereferenced every time you access even your own allocated memory, not to mention your own code in the case of self-modifying code (which is a no-no on the Mac). There are some exceptions where you can lock the allocated memory in place (system resources do this). I have not heard or seen anything about this being an issue, though.

Still, by the time System 7 was a thing it makes way more sense to have an MMU do the heavy lifting of address translation and let the code think it's accessing memory directly. What's the point of doing this in a single-tasking OS? Did it work this way pre-MultiFinder? I guess if you were determined to have System 7 run on a non-MMU system it's one way to enhance stability, but I really have to wonder what was going through the engineers' heads at Apple.
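For anyone who never wrote against the Toolbox, here's a minimal sketch of that handle dance in C. `FillBuffer` is a made-up example function, but `NewHandle`/`HLock`/`HUnlock`/`DisposeHandle` are the actual Memory Manager calls; the rule is you only hold a raw pointer while the block is locked:

```c
#include <MacMemory.h>  /* Memory Manager; <Memory.h> on older SDKs */

/* Sketch of Toolbox handle discipline: the Memory Manager is free to
 * move an unlocked block, so you keep a Handle (pointer to a master
 * pointer) and only take a raw pointer while the block is locked. */
static OSErr FillBuffer(Size count)
{
    Handle h = NewHandle(count);   /* allocate a relocatable block */
    if (h == NULL)
        return MemError();

    HLock(h);                      /* pin the block; *h is now stable */
    {
        char *p = *h;              /* dereference the master pointer */
        Size  i;
        for (i = 0; i < count; i++)
            p[i] = 0;
    }
    HUnlock(h);                    /* let the Memory Manager move it again */

    DisposeHandle(h);
    return noErr;
}
```

The point of all this, as I understand it, is that the Memory Manager can compact the heap around unlocked blocks, which is how the original MMU-less Macs avoided fragmenting their tiny heaps.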

I have no idea whether Cyberpatcher/Oxypatcher work with it, since they were written long after FUSION was released. In those cases you can replace the traps (as they occur, in real time) with direct jumps, which typically requires copying the trap and the next instruction so the trap can be replaced with a jump to an address. Do the Apple math libraries just handle transcendentals badly on the 040/060? Do they just rely on exception trapping? What about running on a system where Cyberpatcher/Oxypatcher sticks a JIT-cached version into the code after the first trap?
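On the mechanics: I don't know exactly how those patchers lay out their patches, but conceptually it has to look something like this hypothetical C sketch. An A-line trap is a 2-byte opcode, while a 68k `JMP` to an absolute long address (`0x4EF9` plus a 32-bit address) needs 6 bytes, so the 4 bytes after the trap get displaced into a thunk that runs them and jumps back. All names here are invented; real code would also have to verify the displaced bytes end on an instruction boundary and flush the CPU caches afterwards:

```c
#include <stdint.h>
#include <string.h>

enum { JMP_ABS_L = 0x4EF9 };      /* 68k opcode: JMP (xxx).L */

/* The thunk is just data laid out as 68k code: the displaced
 * instruction bytes, then a JMP back past the patched site. */
typedef struct {
    uint16_t displaced[2];        /* the 4 bytes the address clobbers */
    uint16_t jmp_op;              /* JMP abs.L back to the caller */
    uint32_t jmp_back;
} Thunk;

/* Replace the trap at `site` with a jump to `fast`, a replacement
 * routine that finishes by jumping into the thunk to execute the
 * displaced instruction(s). */
static void patch_trap(uint16_t *site, uint32_t fast, Thunk *t)
{
    memcpy(t->displaced, site + 1, 4);  /* save what we pave over */
    t->jmp_op   = JMP_ABS_L;
    t->jmp_back = (uint32_t)(uintptr_t)(site + 3);

    site[0] = JMP_ABS_L;                /* overwrite the trap word */
    site[1] = (uint16_t)(fast >> 16);   /* 68k is big-endian: */
    site[2] = (uint16_t)(fast & 0xFFFF);/* high word first */
}
```

The ugly part, and presumably why these tools patch lazily as each trap fires, is knowing whether those 4 displaced bytes actually form whole instructions.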
