Notes on Interrupts, INT_SetMsk(), INT_SetFunc()

This is going to be rather long, and with many questions...

So I've been looking more into interrupts. At the moment I have a vague notion that they are automatically or manually triggered hardware events which must be dealt with by the system, which responds using interrupt functions (vectors?, still pretty new to this), and that some interrupts can be ignored or disabled using something called "masking".

Anyway! Reading the 3rd Program Library User's Guide, page 55, "Interrupt Management Library", which I assume corresponds to sega_int.h and the INT section of the SBL6.0 library.

I. On top of page 56 they discuss using the INT_SetMsk() function to enable and disable certain interrupts within the system. One of the lines they give is...

Code:
void sysInit()
{
    INT_SetMsk((INT_MSK_HBLK_IN | INT_MSK_VBLK_OUT), (INT_MSK_SPR | INT_MSK_DMA1));
}

The comment for this piece of code says "Enables H-Blank-In, V-Blank-Out interrupts, Disables Sprite-Draw-Complete and Level-0-DMA Interrupts."

But! In SEGA_INT.H within the SBL6 library, INT_SetMsk() doesn't take a list of enable and disable interrupts. It only has...

#define INT_SetMsk(msk_bit)

Which definitely does not have room for two argument lists.

On the other hand, INT_ChgMsk seems to fit perfectly...

#define INT_ChgMsk(ena_msk_bit, dis_msk_bit)

So my question is, is this example on page 56 actually using INT_ChgMsk, and there was a typo? Or, is it correct and I just don't understand how listing the interrupts works? (I'm pretty sure I'm not stupid here and it's a typo.)
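For what it's worth, here's how I picture the two-argument version working, as a plain-C model of a read-modify-write on a shadow mask word. The flag values and the function body are my own made-up illustration, not the real sega_int.h constants or the SBL implementation:

```c
#include <stdint.h>

/* Hypothetical flag values for illustration -- NOT the real sega_int.h
 * constants. Convention matches the SCU mask register:
 * 1 = masked (disabled), 0 = enabled. */
#define MSK_HBLK_IN  (1u << 0)
#define MSK_VBLK_OUT (1u << 1)
#define MSK_SPR      (1u << 2)
#define MSK_DMA1     (1u << 3)

static uint16_t shadow_mask = 0xFFFF; /* everything masked to start */

/* Model of a two-argument change-mask call: clear the mask bits of the
 * interrupts to enable, set the mask bits of the ones to disable. */
static void chg_msk(uint16_t ena_bits, uint16_t dis_bits)
{
    shadow_mask &= (uint16_t)~ena_bits; /* enable: clear their mask bits */
    shadow_mask |= dis_bits;            /* disable: set their mask bits */
}
```

Under that model, `chg_msk(MSK_HBLK_IN | MSK_VBLK_OUT, MSK_SPR | MSK_DMA1)` would leave H-Blank-In and V-Blank-Out unmasked and the other two masked, which is exactly what the page 56 comment describes.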

II. Next question! What then does INT_SetMsk actually do?

Would INT_SetMsk(INT_MSK_HBLK_IN) enable the HBlank-In interrupt, or disable it? Or would it toggle it? I really have no clue what it would do.

III. Next point! Page 57 of the Program Library User Guide 3 has the following two lines in the example at the top:

Code:
void systemInit(void)
{
    INT_SetFunc(INT_SCU_VBLK_IN, vblkIn);
    INT_SetSCUFunc(INT_SCU_VBLK_OUT, vblkOut);
}

Alright. The comment says "vblkIn interrupt function is set to the V-BLK-IN interrupt vector table" for the first, and "vblkOut interrupt function is set to the SCU interrupt function of the v-blk-out interrupt vector" for the second.

I interpret the first to mean "the function called 'vblkIn' has been entered in the interrupt vector table as the proper function to be called when the V-BLK-IN interrupt is triggered."

I interpret the second to mean "the function called vblkOut will be the interrupt function called by the SCU when the V-BLK-OUT interrupt is triggered [and thus, vblkOut will be entered in the SCU's interrupt vector table as the appropriate function to be called when the V-BLK-OUT interrupt is triggered]."

III-A. Are these interpretations correct?

III-B. The second example is clearly for the SCU interrupt vector table, but what about the first? The standard INT_SetFunc() seems to tell me that we are registering an interrupt function to the MAIN CPU/SH2, not the SCU, but the name we pass to SetFunc() is still INT_SCU_VBLK_IN. If this is a Main CPU interrupt and not an SCU interrupt, shouldn't we be passing the main CPU's interrupt flag and not the SCU's?

I ask this because I don't actually see an INT_CPU_VBLK_IN flag, so I'm guessing the main CPU doesn't have a V-BLK-IN interrupt. Or perhaps it does, and I misunderstand. SEGA_INT.H reveals there is an INT_ST_VBLK_IN, but I don't know what 'ST' means.

Basically, I want to know what that first INT_SetFunc on page 57 is doing -- what interrupt are we registering the function to, and to what CPU.

Sorry this is a lot of questions. I'm still unsure about this stuff. If my understanding of interrupts is lacking, feel free to just point me somewhere else :) If my question is genuinely worthwhile (ie, there's an error in the manual or something Saturn-specific) then I'd appreciate any answers you guys can give me.

Also sorry for being so inexperienced!
 
Let me answer a few of my own questions...

II. INT_SetMsk enables the mask bit for the given interrupt flag. [I _think_ I interpreted that right]. Thus, masking [disabling] that interrupt.

III-B. The SCU and main CPU interrupts must both be set by the Master CPU SH-2, so it is the processor which calls both of those functions. I'm _guessing_ that the second line is where the Master CPU sets the V-BLK-OUT interrupt function for the SCU.

I'm also guessing that the first line is where the Master CPU tells itself to call the vblkIn function when it receives the V-BLK-IN interrupt from the SCU! Is this correct? (the idea of the SCU sending a signal on v-blk-in to the Main CPU, telling it that the SCU has received a V-BLK-IN interrupt.) If so, this would explain why the Main CPU has an interrupt function set for an interrupt that is only triggered by the SCU.
 


I'll try to explain interrupts from the point of view of programming the hardware directly, as I haven't used Sega's libraries.

The vector table is an array of 256 vectors that contain the address of subroutines which are executed when interrupts occur. They are numbered from 0 to 255. The address of this table in memory is set through the SH-2's VBR register, and usually points to RAM so the vectors can be freely modified.

When an interrupt is requested, it's associated with a vector number (which says which vector in the table is used with this particular interrupt) and a priority level. The priority level allows more important interrupts to be accepted while a less important interrupt is currently being serviced. The SH-2 decides which priority levels are ignored using the 4-bit interrupt mask field in its SR register: when set to $F, all priority levels are disabled; when set to $0, all priority levels are enabled.
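As a quick sketch, updating that field is just masked bit arithmetic (this is plain C math on a value, not actual SR access; the field sits at bits 4-7 of SR, the I3-I0 bits):

```c
#include <stdint.h>

/* Return a new SR value with the interrupt mask field (bits 4-7, the
 * I3-I0 bits) set to the given priority level 0-15. Requested interrupts
 * at or below that level are then ignored by the SH-2. */
static uint32_t sr_with_int_level(uint32_t sr, unsigned level)
{
    return (sr & ~0xF0u) | ((uint32_t)(level & 0xFu) << 4);
}
```

So `sr_with_int_level(sr, 0xF)` blocks everything maskable and `sr_with_int_level(sr, 0)` lets everything through.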

Now, what causes an interrupt to be triggered? There are several sources: the SCU, on-chip peripherals (DMAC, SCI), and others (the NMI pin). The vector numbers associated with these sources are fixed in some cases and user-defined in others.

Note that some types of software activity (TRAP instruction, illegal instruction encountered, user break control) work just like interrupts, but have fixed vector numbers and the highest priority level.

The SCU itself has a bunch of interrupt inputs that other devices (VDP1/2, SCSP) use to request an interrupt. Depending on which of these interrupt inputs are enabled, the SCU will figure out the priority and issue an interrupt request to the SH-2. If the SH-2's SR register is set up to ignore a particular priority level, the interrupt is ignored.

Page 27 of the SCU User's Manual shows the interrupt sources the SCU is connected to, their associated vector number, and the priority level (0-F) they correspond to. For example the V-Blank IN interrupt uses vector 0x40 and is priority level $F - a very high priority interrupt.

To hook the V-Blank IN interrupt there are several things you have to do:

- Set the SCU's interrupt mask register to enable V-Blank IN by writing $BFFE to $25FE00A0. Bit 0 is the mask bit for V-Blank IN (0=enabled, 1=masked)

- Set vector number 0x40 in the vector table to point to your V-Blank function.

- Set the SH-2 SR register to allow interrupt priority level $F.
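Pieced together, the first two steps above might look like this in C. The register address and bit are from the SCU manual, but I've made the mask register a parameter so the bit-twiddling can be checked against a plain variable instead of real hardware, and `int_handler` is just my own typedef:

```c
#include <stdint.h>

#define SCU_INT_MASK_ADDR 0x25FE00A0u /* SCU interrupt mask register */
#define VBLK_IN_BIT (1u << 0)         /* bit 0 = V-Blank IN (0=enabled, 1=masked) */
#define VBLK_IN_VECTOR 0x40           /* vector number for V-Blank IN */

typedef void (*int_handler)(void);

/* Step 1: unmask V-Blank IN. On hardware you'd pass
 * (volatile uint16_t *)SCU_INT_MASK_ADDR here. */
static void enable_vblank_in(volatile uint16_t *mask_reg)
{
    *mask_reg &= (uint16_t)~VBLK_IN_BIT;
}

/* Step 2: install a handler at vector 0x40 of the 256-entry
 * vector table that VBR points to. */
static void set_vblank_in_vector(int_handler *vbr_table, int_handler fn)
{
    vbr_table[VBLK_IN_VECTOR] = fn;
}
```

Step 3 is then lowering the SR interrupt mask field so that priority level $F is accepted.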

Now, my guess is that:

- INT_SetMsk() sets the SCU interrupt mask register. INT_ChgMsk() works like you suspected and maybe there's a typo in the library. That would match the source code's comment.

- The SCU doesn't have any way to assign interrupt functions to it specifically, so the library has some layer of abstraction when dealing with the SH-2 vector table entries that correspond to SCU interrupts (like V-Blank IN, etc.)

Does this make any sense?
 
Alright. So I'm guessing that what INT_ChgMsk() (and in fact the rest of the interrupt section of the SBL) does is actually maintain a copy of the interrupt masks outside of the mask register itself. This has to be true because the Program Library User Manual 3 itself says so.

So when I use INT_ChgMsk, I supply my own bitfield (is that the right term?) by mixing the various flags of interrupts I wish to mask/enable by using the bitwise OR operator. ChgMsk then uses this to individually set the mask bit of each flag's interrupt mask register.

Does that sound right?

SCU sends interrupt requests to the SH-2. Sounds perfectly understandable now.

I do believe the SCU allows you to assign interrupt functions to it, as there is an actual INT_SetSCUFunc() command which is separate from the INT_SetFunc() command. Perhaps? The Program Library User Manual 3 seems to suggest that there is a difference between the two (SetSCUFunc() takes precedence over the standard INT_SetFunc(), or something).

So I'm not really sure whether the SCU has its own interrupt vector table or whether it just passes interrupt requests to the SH-2.

Most of that did make sense though. Think I'll go check that part on page 27 in the SCU User Manual...
 
I dug into the sources a little and noticed the difference:

* INT_SetFunc(num, hdr)

Calls BIOS function SYS_SETSINT(num, hdr)

* INT_SetScuFunc(num, hdr)

Calls BIOS function SYS_SETUINT(num, hdr)

Going by the "Sega Saturn System Library User's Guide" (ST-162-R1-092994.pdf), it sounds like INT_SetFunc directly assigns a routine to the vector table, whereas INT_SetScuFunc chains the routine to some of its SCU housekeeping code.

This is why the description warns that when using SYS_SETSINT for SCU related processing, the "SCU interrupt process routine of that vector becomes ineffective" because your code is being directly called rather than having the SCU interrupt management code being included.

Basically you'd use the first function for all non-SCU interrupts (say the divider unit, NMI, SCI) and the second function call for SCU managed interrupts. I wonder what the housekeeping code actually does.

Originally posted by Omni@Sat, 2006-05-20 @ 12:16 AM

Alright. So I'm guessing that what INT_ChgMsk() (and in fact the rest of the interrupt section of the SBL) does is actually maintain a copy of the interrupt masks outside of the mask register itself. This has to be true because the Program Library User Manual 3 itself says so.

Yeah. It calls SYS_CHGSCUIM, which is documented in that guide I referenced above; parameter 1 is the bits to enable and parameter 2 is the bits to disable.

So when I use INT_ChgMsk, I supply my own bitfield (is that the right term?) by mixing the various flags of interrupts I wish to mask/enable by using the bitwise OR operator. ChgMsk then uses this to individually set the mask bit of each flag's interrupt mask register.

Yes, that's correct.
 
Do you think it's required to setup interrupts using SetSCUFunc() so that the SCU's interrupt handling can be utilized? I mean, not _required_, but how would that affect their use? Though I guess you'd need to know what the SCU actually does when it receives those interrupts...

Thanks for the help, it's making more sense now.

New question, if nobody minds:

What is the difference between the two memory maps shown in the SCU User Manual on pages 20 and 22? One is called a "cache_address" map, the other "cache_through_address." It looks like the same memory map from two different reference points, but I don't actually know what that perspective is (what's accessing the map in each case?)

Basically, what is the significance of the 'cache' on the SH-2? I know it stores things, yes, but it seems to imply that I need different access methods when data stored in the cache suddenly becomes...outdated, for lack of a better term...when the actual data [which the cache is a copy of] changes.

In other words, I have only a 35% understanding of pages 20-22 of the SCU manual.
 
Originally posted by Omni@Sat, 2006-05-20 @ 01:17 PM

Do you think it's required to setup interrupts using SetSCUFunc() so that the SCU's interrupt handling can be utilized? I mean, not _required_, but how would that affect their use? Though I guess you'd need to know what the SCU actually does when it receives those interrupts...


It's required to use if you want the BIOS and system libraries to still work with other SCU interrupts nicely. Otherwise you have to manually add code to your interrupt handler to acknowledge the interrupt and update the corresponding BIOS variables to reflect that. Sounds messy.

The distinction IMO is that if you *really* need to hook an interrupt and not waste time doing the SCU interrupt housekeeping, you use SYS_SETSINT. This is for time critical things like using one of the timers for mid scanline split screens or something wacky like that. And SYS_SETSINT is generally used for any non-SCU interrupt too.

Otherwise, using SYS_SETUINT will let you write simple interrupt handlers and let the BIOS take care of all the other tasks associated with SCU interrupts. So I'd stick with SetSCUFunc().

What is the difference between the two memory maps shown in the SCU User Manual on pages 20 and 22? One is called a "cache_address" map, the other "cache_through_address." It looks like the same memory map from two difference reference points, but I don't actually know what that perspective is (what's accessing the map in each case?)


The cache is a small chunk of RAM in the SH-2 chip. When memory is accessed, the SH-2 checks if it's already got that data in the cache. If so, the cached memory data is used which is faster than actually doing the memory access to an external device like SDRAM. If the memory isn't cached, not only is the data requested read from memory, but a certain number of bytes are read in total (like 16 or something) under the assumption future memory accesses will be from adjacent addresses.

In the case of executing program code this is almost always true, as instructions are ordered sequentially. So the cache speeds up memory accesses and increases efficiency. You can also organize code and data, and access it in such a way as to maximize cache usage for more speed.

The cache contents are only valid if they are 'in sync' with the memory that was read to load the cache in the first place. This is a problem when the memory changes without the CPU knowing. For example if the CPU caches 16 bytes of RAM from VDP2 RAM, then DMA changes VDP2 RAM, the cache has the old data and not the new data.

The cache-through-address area is a copy of the memory map where the cache is disabled; accessing memory will not load the cache or flush cache contents. You'd use this for things like the SCSP, VDP1, VDP2, SCU, and any other non-memory device. Note that all their addresses are in the cache-through-address area ($2xxxxxxx) in the manuals for this reason.

The cache-address area is where the cache is enabled. You'd use it for the BIOS ROM and work RAM; taking into account that DMA can update RAM without the cache being updated.
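Concretely, a cache-through pointer is just the same address with the $20000000 bit set; a tiny sketch of the mapping (the VDP2 VRAM base below is the usual documented value, used only as an example):

```c
#include <stdint.h>

#define CACHE_THROUGH_BIT 0x20000000u

/* Map a cached-area address onto its cache-through mirror, i.e. the
 * same physical location accessed with the cache bypassed. */
static uint32_t cache_through(uint32_t addr)
{
    return addr | CACHE_THROUGH_BIT;
}
```

For example, VDP2 VRAM at $05E00000 becomes $25E00000 through the cache-through window, which is why the manuals quote $2xxxxxxx addresses for the I/O devices.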

I know it stores things, yes, but it seems to imply that I need different access methods when data stored in the cache suddenly becomes...outdated, for lack of a better term...when the actual data [which the cache is a copy of] changes.

I can't recall exactly what conditions fill and flush cache lines on the SH-2. You won't experience this as a problem in most cases that I can think of, unless you write some bizarre self-modifying code. Plus there are library functions to invalidate the cache, useful if say you DMA a new program overlay into RAM and want to start executing it.

As a side note I'll point out you can split the 4K cache into 2K cache and 2K of general purpose RAM that is very high speed. Can be useful in some situations.

I think one of the two SH-2 manuals describe all the cache operations and mechanisms if you need nitty gritty details.
 
Alright. So the cache-address map is not actually representative of actual memory in the cache, since obviously that entire memory map is much larger than 4K.

But, the cache-address map is a region of the memory map that when hit, will access that region of the map and store the immediately surrounding data in the cache.

The cache-through map just allows direct access to the components on the memory map without writing to the cache.

So in essence, the cache-address map and cache-through map are virtual mirrors of the same Saturn memory map? (The only difference being, one map enables access with caching and the other does no caching). Is this correct?

But then: why would I ever want to risk accessing cache memory that isn't reflective of the real state of memory? Why not just do everything with cache-through, if I'm willing to take the speed hit? [Is the speed-hit the only disadvantage?]
 
So in essence, the cache-address map and cache-through map are virtual mirrors of the same Saturn memory map?

(The only difference being, one map enables access with caching and the other does no caching). Is this correct?


Yes. The SH-2 has a 32-bit address space, but physically only enough pins to access 128MB. Of the remaining bits, the top 3 select several modes of accessing memory:

0x00000000 : Cache enabled area

0x20000000 : Cache disabled area

0x40000000 : Cache purge area (used to flush cache lines specifically)

0x60000000 : Directly access the cache address storage

0xC0000000 : Directly access the cache data storage

0xE0000000 : On-chip peripherals

Typically you'll never use the three cache control areas. The remaining unaccounted-for address bits are ignored, though I suppose for compatibility with future hardware (lol) you'd want to keep them at zero.
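Decoding which partition an address falls in is just a matter of looking at those top three bits; a quick sketch:

```c
#include <stdint.h>

/* The top 3 bits of an SH-2 address select the access mode:
 * 0 = cached, 1 = cache-through, 2 = cache purge,
 * 3 = cache address array, 6 = cache data array, 7 = on-chip peripherals. */
static unsigned address_region(uint32_t addr)
{
    return (unsigned)(addr >> 29);
}
```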

But then: why would I ever want to risk accessing cache memory that isn't reflective of the real state of memory? Why not just do everything with cache-through, if I'm willing to take the speed hit? [Is the speed-hit the only disadvantage?]



Because the speed gain is quite significant. I don't have any timing data, but running a routine that is mostly or entirely cached is much, much faster than executing it out of low or high work RAM. This is the exact reason why modern CPUs have such large caches: so they can access main memory as infrequently as possible and utilize the cache more.

Also I should point out that the operation of the cache is basically transparent to the user. I've written a lot of Saturn programs and have never had problems caused by the cache, or had to specifically code things to work around it or along with it.
 
While we're on the subject, I'd like to ask a question or two: I have the following code:
Code:
 .TEXT
 .GLOBAL _int_cpu_exec
_int_cpu_exec:
	MOV	r4, r7		! r7 = r4
	MOV	r5, r4		! r4 = r5
	MOV	r6, r5		! r5 = r6
	MOV.L	set_int, r1
	MOV.L	@r1, r1
	JSR	@r1
	NOP
	MOV	r7, r4
	NOT	r4, r4
	MOV	#0, r5
	MOV.L	chg_mask, r1
	MOV.L	@r1, r1
	JSR	@r1
	NOP
	RTS
	NOP
 .ALIGN 4
chg_mask:
 .LONG 0x6000344
set_int:
 .LONG 0x6000300
The code works fine when I call it in C:
Code:
int_cpu_exec(0x2, 0x41, &vblank_out);
The only problem is that anything called after int_cpu_exec() isn't executed... It's as if it were stuck in a while loop.
 
CyberWarriorX helped me out. Apparently the JSR instruction overwrites the PR register. I needed to save the PR register to stack :blush:
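For anyone who lands here later, the fix amounts to preserving PR around the JSRs. A sketch of the corrected routine (untested, just applying the PR save/restore that was described):

```
 .TEXT
 .GLOBAL _int_cpu_exec
_int_cpu_exec:
	STS.L	PR, @-r15	! save return address; JSR overwrites PR
	MOV	r4, r7		! r7 = r4
	MOV	r5, r4		! r4 = r5
	MOV	r6, r5		! r5 = r6
	MOV.L	set_int, r1
	MOV.L	@r1, r1
	JSR	@r1
	NOP
	MOV	r7, r4
	NOT	r4, r4
	MOV	#0, r5
	MOV.L	chg_mask, r1
	MOV.L	@r1, r1
	JSR	@r1
	NOP
	LDS.L	@r15+, PR	! restore return address before returning
	RTS
	NOP
 .ALIGN 4
chg_mask:
 .LONG 0x6000344
set_int:
 .LONG 0x6000300
```

Without the STS.L/LDS.L pair, the RTS jumps back to the address left in PR by the last JSR, which is why execution appeared to hang after the call.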
 