Global palette bugs?
#1
Hi!

I think I found a weird bug with the global palettes.

I created 8 global palettes of size 16 and loaded the Sonic foreground tilemap. Normally everything should be black, but here I get some blue pixels:
When I set the size to 22, I don't have any issues.

[Image: 4aee05b80bde0edcbfa9a01443aaadf9-Full.webp]

Is that normal?
Edit: It would be great if the user could disable the global palettes, or enable them again.
Edit 2: Is it possible to set a Layer's tilemap / bitmap / objectlist to NULL?
#2
Hi!

The foreground tileset palette uses 22 colors, so that's not a bug.

Global palettes are dynamic; they aren't used unless they're configured. Can you expand on your idea / use case?

In general it's not a good idea to set NULL on required data that is in use. Have you tried it? What's the purpose of it?

Regards,
#3
(05-30-2024, 05:37 AM)megamarc Wrote: Hi!

The foreground tileset palette uses 22 colors, so that's not a bug.

Global palettes are dynamic; they aren't used unless they're configured. Can you expand on your idea / use case?

In general it's not a good idea to set NULL on required data that is in use. Have you tried it? What's the purpose of it?

Regards,

Hi!

I'm trying to make a framework in Nim around Tilengine, which is not an easy task because I try to make it as safe as possible. To ensure that, I had to put restrictions in place. For example, a palette cannot belong to 2 bitmaps at the same time, to avoid double-free bugs.

I also have suggestions. Tilemaps can use the global palettes, but bitmaps and sprites cannot. I think extending this to them would be an interesting feature.

I find that by using the global palettes, there are fewer chances to introduce bugs (double frees, use-after-frees, leaks, ...). It is also how it works on old consoles: the palettes are centralized in Color RAM.
If you implement global palettes for sprites, I suggest having extra global palettes dedicated to sprites (like palettes 8 to 15 on the SNES; the NES, PC Engine and Game Boy Advance have separate palettes for sprites).

What do you think about this idea?

Edit: About the global palettes "bug": so it's normal that I can see a bit of blue?
#4
Hi!

Nice suggestions, as always. I'll comment a bit.

By design I didn't want Tilengine to be a 1:1 replica of past systems, imposing artificial restrictions. One of the major differences is palette management. Classic systems had limited color capabilities, with fixed slots of CRAM as you say. Graphic designers had to be very aware of color restrictions and export their hand-crafted palettes to be loaded into CRAM separately.

Current graphics assets (bitmaps, PNGs...) all have built-in palettes that are imported directly, in a free-form fashion. Of course graphic designers could craft small palettes and use them on all their graphics, but they don't work this way anymore. They author the bitmap with the palette embedded in it, to be loaded by their framework of choice. So I don't think the old way of managing color would fit current workflows.

I wouldn't partition global palettes into "tile" and "sprite" pools; that was just a chip limitation. I would offer a larger global palette pool and let any element use any global palette. Of course you could arrange your palettes in any layout of your choice, but I won't make the engine impose an artificial restriction that forces you to work in a specific manner.

Sprites and tile-based layers don't own palettes; they just hold references, and they don't create or destroy palettes. Spritesets and tilesets are the ones that own the palettes. I could implement a TLN_SetSpriteGlobalPalette() function to make a sprite use a global palette instead of its referenced palette, but I don't think it's interesting because the sprite already has a valid palette obtained from its spriteset.

I think new features should be implemented to provide added value for artists and programmers in their workflow, but I feel the features you're suggesting are more geared towards protecting the system from being broken on purpose -like deallocating resources that are in use- and enforcing old limitations. This happens with any graphics (or whatever) system library written in an unmanaged language. You can easily break wingdi, libcurl, expat, sdl or any other well-established library if you intend to do so.

Don't get me wrong, I really appreciate your suggestions and take them into account -in fact I've already implemented many of them- but this time I feel they don't align well with my vision.

However, if you can provide a use case where global palettes on sprites would be more suitable or capable of achieving a particular effect than spriteset palettes (and not just to make it harder to deliberately break things), I'll be glad to implement it.

On the "blueish" color palette, the renderer is reading memory past the data of the palette itself, so you're getting random data. You colud be getting segmentation fault errors, too. For performance reasons the renderer doesn't compare palette bounds on each pixel, it assumes you provided a suitable palette. By using undersized palettes you're working outside of prerequisites, so unwanted effects are not a bug, just undefined behavior.

Best regards,
#5
(05-31-2024, 07:15 AM)megamarc Wrote: Hi!

Nice suggestions, as always. I'll comment a bit.

By design I didn't want Tilengine to be a 1:1 replica of past systems, imposing artificial restrictions. One of the major differences is palette management. Classic systems had limited color capabilities, with fixed slots of CRAM as you say. Graphic designers had to be very aware of color restrictions and export their hand-crafted palettes to be loaded into CRAM separately.

Current graphics assets (bitmaps, PNGs...) all have built-in palettes that are imported directly, in a free-form fashion. Of course graphic designers could craft small palettes and use them on all their graphics, but they don't work this way anymore. They author the bitmap with the palette embedded in it, to be loaded by their framework of choice. So I don't think the old way of managing color would fit current workflows.

I wouldn't partition global palettes into "tile" and "sprite" pools; that was just a chip limitation. I would offer a larger global palette pool and let any element use any global palette. Of course you could arrange your palettes in any layout of your choice, but I won't make the engine impose an artificial restriction that forces you to work in a specific manner.

Sprites and tile-based layers don't own palettes; they just hold references, and they don't create or destroy palettes. Spritesets and tilesets are the ones that own the palettes. I could implement a TLN_SetSpriteGlobalPalette() function to make a sprite use a global palette instead of its referenced palette, but I don't think it's interesting because the sprite already has a valid palette obtained from its spriteset.

I think new features should be implemented to provide added value for artists and programmers in their workflow, but I feel the features you're suggesting are more geared towards protecting the system from being broken on purpose -like deallocating resources that are in use- and enforcing old limitations. This happens with any graphics (or whatever) system library written in an unmanaged language. You can easily break wingdi, libcurl, expat, sdl or any other well-established library if you intend to do so.

Don't get me wrong, I really appreciate your suggestions and take them into account -in fact I've already implemented many of them- but this time I feel they don't align well with my vision.

However, if you can provide a use case where global palettes on sprites would be more suitable or capable of achieving a particular effect than spriteset palettes (and not just to make it harder to deliberately break things), I'll be glad to implement it.

On the "blueish" color palette, the renderer is reading memory past the data of the palette itself, so you're getting random data. You colud be getting segmentation fault errors, too. For performance reasons the renderer doesn't compare palette bounds on each pixel, it assumes you provided a suitable palette. By using undersized palettes you're working outside of prerequisites, so unwanted effects are not a bug, just undefined behavior.

Best regards,

For the global palettes story, I don't have any other use case besides safety (for now at least), but the reason I use global palettes is because of... palette animations.
For example, you load a bitmap and set an animation on its palette. Later, you free the bitmap, but the palette is also freed. The animation keeps accessing that palette, and now you have a typical use-after-free bug. Global palettes are never deleted unless you explicitly ask for it. They are also always there, and you always know where they are.
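In C terms, the hazard looks roughly like this (a minimal sketch; the file and sequence names are made up, and I'm assuming the usual Tilengine loading calls):

Code:
#include "Tilengine.h"

void hazard(void)
{
    TLN_Bitmap bitmap = TLN_LoadBitmap("background.png");
    TLN_SequencePack sp = TLN_LoadSequencePack("sequences.sqx");
    TLN_Sequence cycle = TLN_FindSequence(sp, "cycle");

    /* animation slot 0 now references the bitmap's embedded palette */
    TLN_SetPaletteAnimation(0, TLN_GetBitmapPalette(bitmap), cycle, true);

    /* deleting the bitmap frees its embedded palette too... */
    TLN_DeleteBitmap(bitmap);

    /* ...but the animation still points at it: the next frame update
       dereferences freed memory. A global palette would outlive the
       bitmap and avoid this. */
}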

The way I achieve palette management for a bitmap is the following:
1) When assigning a bitmap to a layer, I delete the bitmap's internal palette and assign global palette 0 to the bitmap (the global palettes are also stored in a list).
2) When deleting the bitmap from memory, I first assign a dummy palette (of length 1) to the bitmap. Only then do I delete the bitmap, so the global palette isn't freed. (A sketch of both steps follows below.)
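Roughly, in C terms, the two steps look like this (a sketch assuming, as above, that deleting a bitmap also deletes whatever palette it currently holds):

Code:
#include "Tilengine.h"

TLN_Palette global_palette0;   /* lives for the whole session */

/* step 1: replace the bitmap's embedded palette with global palette 0 */
void assign_bitmap_to_layer(int layer, TLN_Bitmap bitmap)
{
    TLN_DeletePalette(TLN_GetBitmapPalette(bitmap));
    TLN_SetBitmapPalette(bitmap, global_palette0);
    TLN_SetLayerBitmap(layer, bitmap);
}

/* step 2: swap in a throwaway 1-entry palette so deleting the bitmap
   frees the dummy instead of the shared global palette */
void destroy_bitmap(TLN_Bitmap bitmap)
{
    TLN_SetBitmapPalette(bitmap, TLN_CreatePalette(1));
    TLN_DeleteBitmap(bitmap);
}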

Nothing to do for the tilemaps, since they prioritize global palettes whenever at least one is set.
For sprites, I can just assign a global palette from my list to the sprite. The engine will prioritize this palette over the spriteset palette.

For the out-of-bounds undefined behavior, I have an idea to avoid a potential segfault. You can do something like this:
palette[bitmapColorIndex * (bitmapColorIndex < paletteLength)]
If the (bitmapColorIndex < paletteLength) test fails, i.e. the index is out of range, the multiplication by 0 sends you to the very first entry of the palette.
I think even palette[bitmapColorIndex & -(bitmapColorIndex < paletteLength)] can work, since negating the test gives an all-ones mask when the index is in range, and 0 otherwise. (A sketch of both variants follows below.)
What I can also do in my framework is sanitize the bitmaps the user loads: I iterate through each pixel, and if the value is greater than or equal to the global palette length, I zero it (or set it to the highest valid index).
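In C, the two branchless variants could look like this (a sketch):

Code:
#include <stdint.h>

/* multiply variant: an out-of-range index is multiplied by 0,
   falling back to palette entry 0 */
static inline uint32_t lookup_mul(const uint32_t* palette, int len, int index)
{
    return palette[index * (index < len)];
}

/* mask variant: -(index < len) is an all-ones mask in range, 0 otherwise */
static inline uint32_t lookup_mask(const uint32_t* palette, int len, int index)
{
    return palette[index & -(index < len)];
}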

Edit: What I did in my framework is quite interesting: there are ownership validations before assignment, and the resources are garbage collected.
#6
"On the "blueish" color palette, the renderer is reading memory past the data of the palette itself, so you're getting random data. You colud be getting segmentation fault errors, too. For performance reasons the renderer doesn't compare palette bounds on each pixel, it assumes you provided a suitable palette. By using undersized palettes you're working outside of prerequisites, so unwanted effects are not a bug, just undefined behavior."

Hi! I think I have an idea for this problem.

No if-statements are necessary. I think you can just do something like myPalette[paletteIndex & (255 + (paletteIndex >= myPalette.getLength()))]

Explanation: we do a binary AND between our index and (255 + (paletteIndex >= myPalette.getLength())).
If paletteIndex is equal to or greater than the length of the palette, the expression adds 1 to 255. Assuming the sum is stored in an unsigned char, it overflows to 0, so you do paletteIndex & 0, which results in 0: you then pick the 0th entry.

If the test is false, you do paletteIndex & 255, which gives you back the original palette index.
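As a sketch in C (the cast is needed to make the 8-bit wrap explicit, since C would otherwise promote the sum to int):

Code:
#include <stdint.h>

/* (uint8_t)(255 + 1) wraps to 0, so an out-of-range index is masked to 0;
   in range, index & 255 leaves an 8-bit index unchanged */
static inline uint32_t lookup_wrap(const uint32_t* palette, int len, uint8_t index)
{
    uint8_t mask = (uint8_t)(255 + (index >= len));
    return palette[index & mask];
}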

What do you think about this idea? Would it tax performance a lot? Since it's branchless, I don't think it will hurt the branch-prediction mechanism, but I may be wrong.


There are some other solutions the programmer can apply, such as:
Sanitizing the graphics; by that, I mean clipping each index below the palette length (a sketch follows below),
Or always using 256-color palettes.
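For the first option, a one-time pass at load time would be enough (sketch):

Code:
#include <stdint.h>

/* clamp every pixel index below the palette length, once, at load time */
void sanitize_indices(uint8_t* pixels, int count, int palette_len)
{
    for (int i = 0; i < count; i++)
        if (pixels[i] >= palette_len)
            pixels[i] = 0;   /* or palette_len - 1 */
}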
#7
Hi!

Sorry for the delay. I'm fixing some issues I found in other areas, so I haven't checked the forums in a while.

Your suggestion is welcome, as always.

However, the color-to-pixel lookup is the innermost part of the loop, where most of the CPU time is spent, so it must be as tight as possible. Adding all these extra per-pixel operations would cause a major performance hit. And there's still a comparison in your solution. The Intel x86 architecture has the CMOV opcode (conditional move); the compiler may use it when translating a conditional assignment, and in some cases it performs better than a branched move. But on architectures without an equivalent opcode, the conditional move is translated into a branch.

Sanitizing the tileset data based on palette size won't work either, as you can reassign palettes on the fly. You can even modify tileset data on the fly, so assuming everything will remain static after the initial load is not realistic.

Probably the best option is to always allocate 256 colors for each palette, so the indirection can never fail. It wastes a bit of space, but so little compared to current memory availability that I think it's the best tradeoff: it protects against badly authored assets while keeping performance.
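Conceptually (plain C, not the engine's actual structures):

Code:
#include <stdint.h>
#include <string.h>

#define PALETTE_ENTRIES 256

typedef struct { uint32_t color[PALETTE_ENTRIES]; } Palette;

/* always allocate 256 entries: copy the real colors and zero-fill the
   rest, so any 8-bit index lands inside the palette */
void palette_init(Palette* pal, const uint32_t* src, int src_len)
{
    memset(pal->color, 0, sizeof pal->color);
    memcpy(pal->color, src, (size_t)src_len * sizeof(uint32_t));
}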

