Building C++ Saturn games

RockinB

I'm trying to port a CPP application to Saturn.

Before that, I've only worked with C language as far as Saturn is concerned.

There are two problems for me, a minor one and a major one.

So I modified the SGL makefile and OBJECTS files to compile C++ files and link them with sh-coff-g++.

There was a linking problem, so I set up Dev-CPP to build Saturn binaries - it works :D - and used it for building a little C++ test app instead.

Here comes the minor problem: when calling SGL functions from within a .cpp file (i.e. a file that has been compiled with sh-coff-g++), the linker reports an undefined reference, although sgl.a is specified.

Workaround: put all SGL function calls in C files and link those.

Already fixed, forget it:

Now the major problem is that the generated binary doesn't work. Here are the errors from various emulators:

GiriGiri debugger 6a:

read access 22000000.L (PC = 1a6c)

undecoded read access at 8603240a.W (PC = 6012e5c)

-> strange Saturn logo flipping through the screen

Satourne:

unknown opcode while interrupting (master SH2) @0x00000002: 0x0200
 
Originally posted by Rockin'-B@Tue, 2004-12-28 @ 02:20 PM

Here comes the minor problem: when calling SGL functions from within a .cpp file (i.e. a file that has been compiled with sh-coff-g++), the linker reports an undefined reference, although sgl.a is specified.


The solution to that is simple:

Code:
#ifdef __cplusplus
extern "C" {
#endif

#include "sgl.h"

#ifdef __cplusplus
}
#endif
If you declare the functions directly in your C++ code they are expected to have C++ linkage, which you can avoid by explicitly specifying C linkage as above (the #ifdef is only needed if the same header is shared between C and C++ modules).

Do you have a linkscript and crt0 that support C++?
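
For context: the part of the crt0 that matters for C++ is running the global constructors before main(). A minimal sketch of that step, assuming the link script gathers the .ctors section between two symbols (the names __ctors_start/__ctors_end here are placeholders and must match whatever your link script defines):

Code:
/* sketch of the C++ startup step: run all global constructors before main() */
typedef void (*ctor_fn)(void);

extern ctor_fn __ctors_start[];   /* placeholder symbols provided by the link script,   */
extern ctor_fn __ctors_end[];     /* e.g. .ctors : { __ctors_start = .; *(.ctors) __ctors_end = .; } */

void call_static_ctors(void)
{
    ctor_fn *p;
    for (p = __ctors_start; p != __ctors_end; p++)
        (*p)();                   /* each entry is one global constructor */
}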
 
Originally posted by antime@Tue, 2004-12-28 @ 11:50 AM

If you declare the functions directly in your C++ code they are expected to have C++ linkage, which you can avoid by explicitly specifying C linkage as above (the #ifdef is only needed if the same header is shared between C and C++ modules).

Do you have a linkscript and crt0 that support C++?



Hey, thanks for this advice!

I use the usual SGL link script in PROJECTS/COMMON/sl.lnk, as well as sglarea.o, but not cinit.o.

cinit.o is compiled as C, and C code cannot call into C++ directly as far as I know, so I had to move main() from cinit.c into my main.cpp.

Anyway, the major problem is gone. My fault, sorry... I forgot to call slInitSystem() :blush:.

The smallest possible C++ program now works with Dev-CPP as well as with the modified makefile/OBJECTS.
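
For reference, that smallest test looks roughly like this: a single main.cpp that wraps sgl.h for C linkage and calls slInitSystem() before anything else. A sketch only, not the exact file used here:

Code:
// minimal C++ test program for SGL; a sketch, not the exact code used above
#ifdef __cplusplus
extern "C" {
#endif
#include "sgl.h"
#ifdef __cplusplus
}
#endif

int main(void)
{
    slInitSystem(TV_320x224, NULL, 1);        // forgetting this call is what caused the hang
    slPrint((char *)"hello from C++", slLocate(9, 12));

    for (;;)
        slSynch();                            // wait for vblank, forever

    return 0;
}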
 
While testing the app I'm porting, I found out that it doesn't get past initialization.

It turns out that it hangs when allocating an array object:

new UBYTE[131136];

What could be the reason for this? It's not the first use of the new operator (it's the second). Could it be something like running out of memory?

My sl.bin is already 660 KB.
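
One way to tell an out-of-memory condition from a crash is the non-throwing form of new, which returns NULL instead of raising an exception. A sketch, assuming the toolchain's libstdc++ ships the <new> header and that UBYTE comes from sgl.h:

Code:
#include <new>      // for std::nothrow, assuming the toolchain's libstdc++ provides it

void try_alloc(void)
{
    // UBYTE is the SGL typedef for unsigned char
    UBYTE *buffer = new (std::nothrow) UBYTE[131136];

    if (buffer == NULL)
    {
        // NULL here means the heap is exhausted; a plain hang points elsewhere
    }
}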
 
Quite possibly. By default, everything is put into workram-h (because it's faster), so you have less than one megabyte of space available.
 
Originally posted by antime@Tue, 2004-12-28 @ 07:39 PM

Quite possibly. By default, everything is put into workram-h (because it's faster), so you have less than one megabyte of space available.

[post=126642]Quoted post[/post]​


Yes, I have investigated this and reduced the binary size in several steps. With each try, the program got further in its execution.

But I have no clue how to deal with memory in C++. For some strange reason, the binary is really huge (500 KB, without other stuff). It doesn't seem to be debug info. Removing the -g switch reduces the sl.coff size, but sl.bin keeps its size.

How do I control where C++ gets its memory from? Maybe I want to have some structures in low work RAM, or even some code, but how? This is really important; I don't know how to proceed otherwise.

BTW: how big can sl.bin be and still work properly?
 
There's no debug info included in pure binaries, so that switch won't help. Try using objdump on the coff file to see what's included and what's taking space.

Space requirements also depend on which C++ features you use. The linker should be smart enough to only pull in the bits you actually need, but the full C++ standard library is over 900KB by itself. Disabling exceptions and RTTI will save space, as will limiting template usage. Google for documents on using C++ in embedded development, they contain lots of information.

IIRC, the default allocator just uses the heap, defined by the bss_start and bss_end symbols. If you want more control over memory allocation you can write your own memory management routines (since you're using C++ you could for instance override operator new for some classes), but if you need such big chunks of dynamically allocated memory I would suggest your design has problems.
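
As an illustration of the per-class override mentioned above, a sketch; the two pool functions are hypothetical placeholders for whatever low work RAM manager you end up writing, not an SGL or SBL API:

Code:
#include <stddef.h>

// hypothetical pool routines for low work RAM; not a real library API
extern "C" void *lwram_alloc(size_t size);
extern "C" void  lwram_free(void *ptr);

class Framebuffer
{
public:
    // every Framebuffer instance is taken from the low work RAM pool
    void *operator new(size_t size)  { return lwram_alloc(size); }
    void  operator delete(void *ptr) { lwram_free(ptr); }

private:
    unsigned char pixels[160 * 144];
};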
 
Originally posted by antime@Tue, 2004-12-28 @ 10:27 PM

...Disabling exceptions and RTTI will save space, as will limiting template usage. Google for documents on using C++ in embedded development, they contain lots of information.

If you want more control over memory allocation you can write your own memory management routines (since you're using C++ you could for instance override operator new for some classes)...



Oh thanks a lot, you're a god :bow!

I did -fno-rtti and -fno-exceptions.

The latter involved removing some thrown exceptions, but it did not work. Maybe the new operator needs exceptions.

Furthermore, I modified the link script and makefile to use low work RAM and produce two binaries (one for low and one for high work RAM). The map file gives good info on the placement.

I can move all .o files to low work RAM, but not the zlib stuff :huh.

Of course, I could link every single .o file of zlib individually, but how do you place the entire contents of a library into a certain SECTION?

Anyway, with the emulators I cannot test the program split into two binaries, because they seem to reset the work RAM to zero when I load the other binary.

Guess what? I implemented overloading of the new, delete, new[] and delete[] operators. Fully parameterizable, globally or locally for every single class. And I can switch between using low work RAM with SEGA_MEM.a or the usual malloc.

This allows me to see every single memory allocation in debug mode, with the requested size and the returned location. Man, I was very surprised how huge some class objects are! It's nice to see where the allocation fails.
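
A rough sketch of what such tracing overloads look like; debug_log() is a hypothetical hook (slPrint, a serial dump, whatever you have), and malloc/free stand in for whichever pool the build actually selects:

Code:
#include <stddef.h>
#include <stdlib.h>

// hypothetical debug hook, not a real API: report tag, size and address somewhere
extern "C" void debug_log(const char *tag, unsigned size, void *ptr);

void *operator new[](size_t size)
{
    void *p = malloc(size);                 // or the low work RAM pool instead
    debug_log("new[]", (unsigned)size, p);  // every allocation becomes visible
    return p;
}

void operator delete[](void *ptr)
{
    debug_log("delete[]", 0u, ptr);
    free(ptr);
}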

Update: fixed now:

But I have problems with low work RAM. I copy stuff from an object in high work RAM to low work RAM, but the writes seem to have no effect; the data written is zero :huh!!

I could only test with Satourne, so with all the unsupported features that I discovered, it could be a Satourne bug. Or are there special things to do when using low work RAM?

Note: I haven't yet tried to disable exceptions again with the overloaded operators.
 
Originally posted by Rockin'-B@Wed, 2004-12-29 @ 08:13 PM

Maybe the new operator needs exceptions.
Depends on your allocator. The one in libstdc++ doesn't need them (both exception-enabled and non-throwing versions are supplied).

Of course, I could link every single .o file of zlib individually, but how do you place the entire contents of a library into a certain SECTION?
Suitable wildcards in the input and output selections in the link script might do the trick, or you could assign the sections when creating the library.

Guess what? I implemented overloading of the new, delete, new[] and delete[] operators. Fully parameterizable, globally or locally for every single class. And I can switch between using low work RAM with SEGA_MEM.a or the usual malloc.
You could also use your custom allocators for tracking memory leaks, memory profiling and other stuff. Instead of overloading the operators you could also have used "placement new", but your system allows more flexibility. I would however recommend against mixing allocators, especially SEGA_MEM and the compiler-provided ones.
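
On the link-script side, the wildcard idea looks roughly like this in GNU ld syntax. A sketch only: the addresses are the usual low work RAM base and SGL load address, the output section names are made up, and an old ld may not accept the archive:member pattern (in that case, list the extracted objects explicitly or rename their sections with objcopy when building the library):

Code:
/* sketch: two RAM regions, with everything from libz.a forced into low work RAM */
MEMORY
{
    lwram : ORIGIN = 0x00200000, LENGTH = 0x100000    /* low work RAM, 1 MB                  */
    hwram : ORIGIN = 0x06004000, LENGTH = 0x0FC000    /* high work RAM above the SGL load address */
}

SECTIONS
{
    .zlibtext : { libz.a:*(.text) } > lwram           /* every .text from objects inside libz.a */
    .text     : { *(.text) }        > hwram           /* everything else stays in high work RAM */
}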
 
Originally posted by antime@Wed, 2004-12-29 @ 11:24 PM

I would however recommend against mixing allocators, especially SEGA_MEM and the compiler-provided ones.



There is no problem with that. I have now added a third option: fixed memory locations for objects. There is only one instance per class, so it should work, be faster and save on library code.

Access to low work RAM magically works now, as does disabling exceptions (with my own new operators). I moved a >256 KB object to low RAM and all of that worked.

The app does run on Satourne, but there seems to be some other problem, since the graphics output doesn't work. The debug text display indicates correct functionality so far.

But on real Saturn, it hangs at some point.
 
Originally posted by antime@Wed, 2004-12-29 @ 11:24 PM

I would however recommend against mixing allocators, especially SEGA_MEM and the compiler-provided ones.



:agree

Hmm, you're right, one must be very careful. The author of the ported app mixed up the pairing of the new and delete operators (a common problem, as I've read in a handbook).

He allocated an array with new[] and destroyed it with delete (not delete []).

But I had overloaded new[] and delete[], so the real Saturn locked up when delete was called!
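
For anyone following along, the mismatch looks like this; with separately overloaded operators the wrong form ends up in the wrong deallocator, which is exactly the lock-up described:

Code:
typedef unsigned char UBYTE;            // as in sgl.h

void example(void)
{
    UBYTE *buf = new UBYTE[131136];     // array form: goes through operator new[]

    // wrong pairing (what the ported code did): plain delete calls operator delete,
    // so with separately overloaded operators the block hits the wrong deallocator
    //     delete buf;

    delete [] buf;                      // correct: delete[] matches new[]
}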

I also tried to minimize the size of the binary. Very annoyingly, the linker links in stuff that is never used. It seems that every public function of a class gets linked, whether it's used or not. So I had to insert some #ifdefs.

As you said, the stdc++ library is very large, and most of the linked code comes from there, much of it stuff I don't want to use. Streams, for example. I don't really need them, but maybe I'll try to exclude those somehow, too.
 
Using new[] and delete on the same object is indeed a bug, but I was referring more to the fact that SEGA_MEM seems to keep its own list of free and allocated memory and does not use the C-library malloc/free. So if you mix the two for the same memory area you will inevitably get collisions; using them on different memory areas should be fine.
 
It's definitely worth looking into sl.map and trying to eliminate the whole bunch of trash that gets linked in.

So I avoided snprintf() and similar functions, and the binary decreased to only 158 KB!!! Now I can put most stuff into high work RAM and it runs faster.

BTW: we were talking about Handy, an Atari Lynx emulator. It can be expected to run at full speed, someday.
 
Originally posted by Rockin'-B@Thu, 2005-01-06 @ 01:18 AM

So I avoided snprintf() and similar functions, and the binary decreased to only 158 KB!!!
The big hog there is the floating-point emulation. Newlib comes with an integer-only version if you really need the functionality.
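
If only integers ever need printing, a tiny hand-rolled formatter avoids snprintf and the float emulation behind it entirely (newlib's integer-only siprintf/iprintf are another option, if your build ships them). A sketch:

Code:
// minimal unsigned-to-decimal conversion for debug text,
// so nothing from the printf family gets linked in
static char *format_u32(unsigned long value, char *buf, int buflen)
{
    char *p = buf + buflen - 1;
    *p = '\0';
    do {
        *--p = (char)('0' + (value % 10));
        value /= 10;
    } while (value != 0 && p > buf);
    return p;                     // points at the first digit
}

// usage:  char tmp[12];  slPrint(format_u32(size, tmp, sizeof(tmp)), slLocate(2, 2));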
 
I just tried to recompile my WonderSwan emulator port with SaturnOrbit. As Dev-CPP links with gcc, I can't link with g++ and have to supply the C++ standard libs manually.

It's a C++ project and it worked when compiling all .cpp files as C files (but only for COFF).

But when I compile them as C++ files, the compiler complains that it can't find the function rand(). I guess it's in the C stdlib, but not in the C++ stdlib.

Not having much experience with C++, I just tried #include <cstdlib>, but then I got a lot of strange errors.

Can anyone help me?

edit:

Forget it, it's written in some strange C dialect, neither real C++ nor real C. I don't care whether it can be built with ELF, as it works fine with COFF.
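
For the record, the usual fix for the rand() case is just the C header: <stdlib.h> declares rand() in the global namespace, which is what C-style code expects, while <cstdlib> puts it into namespace std (and old libstdc++ headers can be fussy about that). A sketch:

Code:
#include <stdlib.h>     // declares rand() in the global namespace, also in C++

int random_byte(void)
{
    return rand() & 0xFF;   // plain ::rand(), no std:: qualifier needed
}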
 