Color + Alpha is killing me.... (Language Help)

Hi,

I'm having great difficulty with the new color setup....

I use pixel(mousex,mousey) to get a color from the canvas.
This color is now returned as a single 4-byte value, with the bytes denoting A, R, G and B.

This 4-byte value has its most significant byte (the alpha) set to 255 (FF). Unfortunately, that also happens to make the number negative.

I need to change the alpha of the returned color.
"COLOR RGB(R, G, B, A)" would allow me to play with the A(lpha) of the color, but only if I can extract the exact red, green and blue components from the value returned by Pixel().

... therein lies the rub.

I've been trying for two days to write simple functions (getred, getgreen, getblue & getalpha) which should return the correct color value (between 0 and 255). These work in a lot of cases, but not always. (Like I already said, I'm not a coder.)

I went through the documentation. In the Color Documentation (http://doc.basic256.org/doku.php?id=en:color ) there are some errors. The formula used to get the value of an ARGB set is of course not ((a*256+r)*256+b)*256+g (ARBG) but ((a*256+r)*256+g)*256+b (ARGB).
Also, white is said to be -460522, but it should probably be -1 (-460522 is almost full yellow), although I don't understand why (all binary ones; the first one is a minus, but how do the other 31 bits turn into the value 1?).
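For what it's worth, the corrected formula also explains the -1. Here is the packing spelled out in Python (used here only because it can show both the signed and unsigned view of the same 32 bits):

```python
import struct

# The corrected packing formula from the color documentation:
# ((a*256 + r)*256 + g)*256 + b, with A, R, G, B each 0-255.
def pack_argb(a, r, g, b):
    return ((a * 256 + r) * 256 + g) * 256 + b

# White with full alpha: every byte is 255, giving 0xFFFFFFFF unsigned.
white = pack_argb(255, 255, 255, 255)
print(hex(white))  # 0xffffffff

# Reinterpreted as a SIGNED 32-bit integer (two's complement), the very
# same bit pattern reads as -1 -- which is why white shows up as -1.
print(struct.unpack('>i', struct.pack('>I', white))[0])  # -1
```

So it is not that 31 bits "turn into 1": the all-ones pattern simply *is* -1 in two's-complement encoding.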

Anyone up to the task? I'll muddle on trying to get to grips with it but if someone already did something like this....

Color + Alpha is killing me....

I have been looking at this for several hours. The problem is that BASIC256 has two types of numeric operations for the operators + - * / \ % ^. If both operands are integers ("Signed: from −2,147,483,648 to 2,147,483,647, i.e. from −(2^31) to 2^31 − 1", http://en.wikipedia.org/wiki/Integer_%28computer_science%29), the math is done with the result being a signed integer in the same range. This was done for speed, but it is causing problems when working with large integer products (like ARGB).

All values stored in variables are converted to 8-byte floating-point values with a range of ±1.7e±308 and ~15 digits of precision. The program runtime stack can handle both data types and converts from one type to the other as needed.
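To make the two code paths concrete, here is a small Python sketch (int32_mul is a stand-in for the old 32-bit integer path, not a BASIC256 function):

```python
# Sketch of the old behaviour: an integer*integer product was kept in a
# signed 32-bit result, so large ARGB values wrapped around to negative.
def int32_mul(a, b):
    r = (a * b) & 0xFFFFFFFF           # keep only the low 32 bits
    return r - 2**32 if r >= 2**31 else r  # reinterpret as signed

# Alpha 255 shifted into the top byte overflows a signed 32-bit int:
print(int32_mul(255, 256 ** 3))  # -16777216 (opaque black)
# An 8-byte float path keeps the true value:
print(255.0 * 256 ** 3)          # 4278190080.0
```

The same starting values give two different answers depending on which path the operands take, which is exactly the inconsistency described above.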

This is why the results don't seem to make any sense. I need to experiment with rewriting the math to always work with floating-point numbers (a bit slower, but it will make life consistent).

It may take a few days. Hopefully this is one of the last big changes to make as we move to 1.0 and I get seriously working on a second edition of my book. :)

Color + Alpha is killing me....

Well, if the issue only pops up when using crazy big 'numbers' that are in fact more like a hexadecimal array (i.e. the 4-byte ARGB), maybe it would be simpler to allow easier base-2 and base-16 number handling. That way, you could let Pixel() return e.g. 0xff00ff00 for green, which is a lot easier to 'read' than -16711936. Of course, something like a shift operator (à la >>) and simple bin2hex / dec2hex / bin2dec ... functions would be needed additions. Just thinking (hey, it would certainly solve my problem...).
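The hex idea can be sketched in a couple of lines of Python: masking the signed color with 0xFFFFFFFF recovers the unsigned 32-bit pattern, which can then be printed as 0xAARRGGBB.

```python
# Show a signed 32-bit color value in the readable 0xAARRGGBB notation.
def to_hex(color):
    return "0x%08x" % (color & 0xFFFFFFFF)

print(to_hex(-16711936))  # 0xff00ff00 -> fully opaque green
print(to_hex(-1))         # 0xffffffff -> fully opaque white
```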
Anyway, thanks as always for your prompt reactions to the issues I raise.

Kind regards,
Uglymike

Color + Alpha is killing me....

Well, I more or less cracked it after reading about two's-complement encoding for signed integers.
I now have some pretty convoluted code that will decompose a numeric color (as given by the pixel command) into its component ARGB color elements...
I first turn the absolute value (so no negative sign) into a binary string, flip the bits (1 becomes 0 and 0 becomes 1), and add 1, and voilà: a binary string containing the ARGB. Then it is only a matter of cutting it up into byte-sized pieces, manipulating them (e.g. by changing the alpha) and reconstituting the number (negative or not) with the rgb() command.
The following program takes the colors stored in the col array, prints a square with each, decomposes it into its ARGB elements, and reconstructs the original number to draw it a second time.

#color value decomposition and recreation
#*****************************************
clg
cls
graphsize 300,300
fastgraphics
dim col(20)
col={-16777216,-1,-65536,-8388608,-16711936,-16744448,-16776961,-16777088, -16711681,-16744320,-65281, -8388480,-256,-8355840,-39424, -5623040, -5987164,-8355712}

for i = 0 to 16 step 2
   colorid = col[i]
   color colorid
   rect (i*10),20,20,20
   newcolor = twocomp(colorid)/2
   coloridb = blauw(newcolor)
   coloridg = groen(newcolor)
   coloridr = rood(newcolor)
   colorida = alpha(newcolor)
   color rgb(coloridr, coloridg, coloridb, colorida)
   rect (i*10),100,20,20
   refresh
   input z
next i

for i = 1 to 17 step 2
   colorid = col[i]
   cls
   newcolor = twocomp(colorid)/2
   coloridb = blauw(newcolor)
   coloridg = groen(newcolor)
   coloridr = rood(newcolor)
   colorida = alpha(newcolor)
   color colorid
   rect ((i-1)*10),40,20,20
   color rgb(coloridr, coloridg, coloridb, colorida)
   rect ((i-1)*10),120,20,20
   refresh
   input z
   refresh
next i
end

function twocomp(hexcol)
   # build the 32-bit binary string of abs(hexcol)
   s$ = ""
   hexcol = abs(hexcol)
   for i = 0 to 31
      r = hexcol % 2
      r$ = string(r)
      s$ = r$ + s$
      hexcol = int(hexcol \ 2)
   next i
   # flip every bit
   t$ = ""
   for i = 1 to 32
      if mid(s$,i,1) = "0" then
         t$ = t$ + "1"
      else
         t$ = t$ + "0"
      endif
   next i
   # add 1, carrying from the right
   carry$ = "1"
   u$ = ""
   for i = 32 to 1 step -1
      if mid(t$,i,1) = "0" and carry$ = "1" then
         u$ = "1" + u$
         carry$ = "0"
      else
         if mid(t$,i,1) = "1" and carry$ = "1" then
            u$ = "0" + u$
            carry$ = "1"
         else
            if mid(t$,i,1) = "0" and carry$ = "0" then
               u$ = "0" + u$
            else
               if mid(t$,i,1) = "1" and carry$ = "0" then
                  u$ = "1" + u$
               endif
            endif
         endif
      endif
   next i
   cls
   print "binary reversed plus 1 =" + u$
   print "press 'enter' for the next color"
   # convert the binary string back to a number
   # (the place values here are doubled; the caller divides by 2 to compensate)
   total = 0
   for i = 32 to 1 step -1
      if mid(u$,i,1) = "1" then total = total + 2^(32-i+1)
   next i
   twocomp = total
end function

function alpha(hexcol)
   if hexcol > 0 then
      alpha = int(((hexcol/256)/256)/256) % 256
   else
      alpha = 0
   endif
end function

function rood(hexcol)
   if hexcol > 0 then
      rood = int((hexcol/256)/256) % 256
   else
      rood = 0
   endif
end function

function groen(hexcol)
   if hexcol > 0 then
      groen = int(hexcol/256) % 256
   else
      groen = 0
   endif
end function

function blauw(hexco)
   blauw = (hexco/256 - int(hexco/256))*256
   # this should simply be hexco%256 but that gives 0....
end function
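For comparison, the flip-the-bits-and-add-one dance above is equivalent to masking the value with 0xFFFFFFFF, which this Python sketch uses to do the same decomposition without any string juggling:

```python
# Masking with 0xFFFFFFFF undoes the two's-complement sign -- the same
# result as abs -> flip bits -> add 1 for the negative color values.
# Shifts then peel off each ARGB byte.
def decompose(color):
    u = color & 0xFFFFFFFF
    a = (u >> 24) & 0xFF   # alpha
    r = (u >> 16) & 0xFF   # red
    g = (u >> 8) & 0xFF    # green
    b = u & 0xFF           # blue
    return a, r, g, b

print(decompose(-65536))  # (255, 255, 0, 0): opaque red
```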

Color + Alpha is killing me....

I have been hard at work to save you from having to do that. BASIC256 treated integers as 4-byte (32-bit) signed integers and tried to keep them that way. The 32-bit ARGB color did the whole two's-complement thing, and what you did works (a dirty hack, but a good one).

The version 0.9.9.45 that I am testing before release includes tobinary, frombinary, tohex, fromhex, and toradix, fromradix to create a string of any base from 2 to 36. The other MAJOR change is that the integer handling has been removed and ALL numbers are treated as double floats at run time. Many statements require integer arguments, but the decimal part, if any, will be ignored.
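For readers wondering what such a conversion does, here is a rough Python sketch of a toradix-style function (the BASIC256 names above are the real ones; this sketch is just an illustration of the idea):

```python
# Repeatedly take the remainder in the target base (2-36) and prepend
# its digit character.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def toradix(n, base):
    if n == 0:
        return "0"
    s = ""
    while n > 0:
        s = DIGITS[n % base] + s
        n //= base
    return s

print(toradix(4278190080, 16))  # ff000000 (opaque black's bit pattern)
print(toradix(10, 2))           # 1010
# The from* functions invert this, e.g. int("1010", 2) == 10 in Python.
```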

With all of these changes I did not expect the code you just posted to work under the new version, but it does.

I hope to have it up in the next day or so.

Color + Alpha is killing me....

Here is your program written for 0.9.9.45. Much easier without all of that binary stuff. Thinking about your program, I do not think it would have worked if alpha had been less than 128.

Here is what the next release will allow you to do:

#color value decomposition and recreation
#*****************************************
if version < 90945 then
   print "requires version 0.9.9.45 or better"
   end
end if

clg
cls
graphsize 300,300
fastgraphics
dim col(20)
col = {black,white,red,darkred,green,darkgreen,blue,darkblue, cyan,darkcyan,purple, darkpurple,yellow,darkyellow,orange, darkorange, grey,darkgrey}

for i = 0 to 16 step 2
   colorid = col[i]
   coloridb = blauw(colorid)
   coloridg = groen(colorid)
   coloridr = rood(colorid)
   colorida = alpha(colorid)
   color colorid
   rect (i*10),20,20,20
   print coloridr + " " + coloridg + " " + coloridb + " " + colorida
   color rgb(coloridr, coloridg, coloridb, colorida)
   rect (i*10),100,20,20
   refresh
   cls
   print "color as binary =" + tobinary(colorid)
   input "press 'enter' for the next color", foo$
   refresh
next i

for i = 1 to 17 step 2
   colorid = col[i]
   color colorid
   rect ((i-1)*10),40,20,20
   liftcolor = pixel(((i-1)*10),40) # pick the color up from the screen
   coloridb = blauw(liftcolor)
   coloridg = groen(liftcolor)
   coloridr = rood(liftcolor)
   colorida = alpha(liftcolor)
   color rgb(coloridr, coloridg, coloridb, colorida)
   rect ((i-1)*10),120,20,20
   refresh
   cls
   print "color as binary =" + tobinary(colorid)
   input "press 'enter' for the next color", foo$
   refresh
next i
end

function alpha(hexcol)
   alpha = hexcol/256/256/256 % 256
end function

function rood(hexcol)
   rood = hexcol/256/256 % 256
end function

function groen(hexcol)
   groen = hexcol/256 % 256
end function

function blauw(hexco)
   blauw = hexco % 256
end function
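The same byte-extraction arithmetic can be sketched in Python, with int() dropping the fractional part the way the new runtime ignores decimals in integer arguments:

```python
# rood/groen/blauw/alpha from the listing above, translated for
# illustration; the Dutch names mean red, green and blue.
def alpha(c): return int(c / 256 / 256 / 256) % 256
def rood(c):  return int(c / 256 / 256) % 256  # red
def groen(c): return int(c / 256) % 256        # green
def blauw(c): return int(c) % 256              # blue

c = 0xFF8040C0  # alpha 255, red 128, green 64, blue 192
print(alpha(c), rood(c), groen(c), blauw(c))  # 255 128 64 192
```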

Color + Alpha is killing me....

Indeed, it would not work when alpha is less than 128, but that's a result of the demo program, since all the input colors have alpha 255.
Simply checking whether the colorid is less than zero, and only then doing the two's-complement conversion, would fix that (up to an alpha of 127 the signed integer is positive, so we can go straight to the rood(), groen() and blauw() functions).
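That sign check can be sketched in one line of Python (unsigned32 is a hypothetical helper name, not part of BASIC256):

```python
# Only a negative colorid (alpha >= 128) needs the two's-complement
# correction; a positive one already is the unsigned ARGB value.
def unsigned32(color):
    return color + 2 ** 32 if color < 0 else color

print(unsigned32(-1))          # 4294967295: alpha 255 (white)
print(unsigned32(0x7F000000))  # 2130706432: alpha 127, already positive
```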

Color + Alpha is killing me....

The program I posted on 2/5 to this thread works great under 0.9.9.46 (the new binary release).