I will receive a byte array of x bytes. Let's say some chunk of data within this large byte array represents whether a light switch is on or off, so for this example I will focus on that single byte (8 bits). 3 of those 8 bits are used to determine the on/off state of the light switch.

So, all I know about this chunk of data are three fixed values: the starting byte position within the byte array, the starting bit position within that byte, and the number of bits allocated to represent the on/off state of the light switch.

In this particular example, the byte position is 15 (within the large byte array of x bytes), the starting bit position is 0, and the number of bits is 3 (since 3 bits are allocated within this byte to represent the on/off state).

My question is: how do I calculate the bit mask and the bit shift value from these three fixed values? I suspect the calculation involves powers of 2 (base 2), but I'm unsure.
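To make this concrete, here's my rough attempt in C. The array contents, the variable names (`data`, `bytePos`, `bitPos`, `numBits`), and the assumption that bit position 0 means the least significant bit are all just mine for illustration, and I'm not sure the mask/shift lines are right:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t data[32] = {0};   /* stand-in for the larger byte array          */
    data[15] = 0x05;          /* pretend bits 0-2 of byte 15 hold the state  */

    size_t bytePos = 15;      /* starting byte position within the array     */
    unsigned bitPos = 0;      /* starting bit position within that byte      */
    unsigned numBits = 3;     /* number of bits allocated to the value       */

    /* mask: numBits ones, shifted up to the starting bit position */
    uint8_t mask = (uint8_t)(((1u << numBits) - 1u) << bitPos);

    /* extract: mask off the unwanted bits, then shift down by bitPos */
    uint8_t value = (uint8_t)((data[bytePos] & mask) >> bitPos);

    printf("mask = 0x%02X, value = %u\n", mask, value);
    return 0;
}
```

With the example values above this prints `mask = 0x07, value = 5`, since `(1 << 3) - 1` gives three ones (`0b00000111`) and the shift by `bitPos` is 0. Is this the right general approach?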

Thanks in advance