One interesting thing I noticed: on the Saturn, you don't need full 3D matrix math and view-frustum calculations to convert a 3D vector into a 2D screen coordinate just to draw polygons. The Saturn's draw command, at least according to the docs I've been reading lately, is independent of the 3D calculation functions. The draw command actually takes the four on-screen points the quad must fill, plus a few other arguments.
To draw polygons in a frame, you first do all your 3D math, then do the frustum projection to get 2D coordinates, and issue the drawing routine for each polygon, passing it the 2D points on the screen. Because of that, a clever programmer could use less accurate, even fake, cheap 3D calculations for some objects. Or cache 2D coordinates. Or calculate loads of coordinates using offsets from a single one.
In games with a fixed camera angle, this could offer a big speed-up, if done properly.
As an example, you could have a Resident Evil-style game with fixed cameras, but real-time 3D scenery instead of pre-rendered backgrounds. When a different camera is triggered, the Saturn calculates all the 3D transformations to set up the new view, but it could then store the 2D coordinates for the scenery polygons in RAM and stop recalculating them, doing the 3D work only for the characters and movable objects.
In a full moving 3D game, this could be done for polygons after a certain distance. Maybe this was one of the tricks behind that awesome Shenmue demo.