Trouble converting formula from Javascript to Python - javascript

I have this JS code to convert incoming bytes to a temperature, and I'm trying to replicate it in Python. The main difference is that in JS I was dealing with a Uint8Array; now I've manually saved the bytes as hex text using the LightBlue iOS app, so I need to convert that text to bytes and then to a temperature.
JS:
var bufferA = new Uint8Array(dataIncoming);
var a = ((bufferA[3]) | (bufferA[4]) << 8);
var b = ((bufferA[5]) | (bufferA[6]) << 8);
var currentTempC = ((1.0 * a) + (1.0 * b))/100.0;
var currentTempF = currentTempC * 9 / 5 + 32;
Python (the commented-out numbers, 175F-168F, are the actual temperatures):
import binascii

bytesRaw = [
    '060203671B0000', #175F
    '060203991D0000', #172F
    '0602031A1E0000', #171F
    '060203381E0000', #169F
    '060203341E0000', #168F
]
for i in range(0, len(bytesRaw)):
    # Trim and convert hex text to bytes
    a = binascii.a2b_hex(bytesRaw[i][6:-6])
    b = binascii.a2b_hex(bytesRaw[i][8:-4])
    print(a, b)
    # Convert to int?
    a = int.from_bytes(a, byteorder='little')
    b = int.from_bytes(b, byteorder='little')
    print(a, b)
    currentTempC = ((1.0 * a) + (1.0 * b)) / 100.0
    currentTempF = currentTempC * 9.0 / 5.0 + 32.0
    print('{0:4.3f}C'.format(currentTempC))
    print('{0:4.3f}F'.format(currentTempF))
Results:
b'g' b'\x1b'
103 27
1.300C
34.340F
b'\x99' b'\x1d'
153 29
1.820C
35.276F
b'\x1a' b'\x1e'
26 30
0.560C
33.008F
b'8' b'\x1e'
56 30
0.860C
33.548F
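A minimal sketch of a likely fix, assuming each saved string is the same 7-byte frame the JS code indexes: the slices [6:-6] and [8:-4] grab only one byte each, whereas the JS combines two bytes per value, so take bytes 3-4 and 5-6 as little-endian 16-bit integers instead. This mirrors the JS arithmetic exactly; for the first frame it yields 70.15C / 158.27F, which still differs from the labelled 175F, so the device's scaling may need a separate look.

import binascii

for raw in bytesRaw:  # bytesRaw as defined above
    frame = binascii.a2b_hex(raw)                       # 7 bytes, e.g. 06 02 03 67 1B 00 00
    a = int.from_bytes(frame[3:5], byteorder='little')  # bufferA[3] | bufferA[4] << 8
    b = int.from_bytes(frame[5:7], byteorder='little')  # bufferA[5] | bufferA[6] << 8
    currentTempC = (a + b) / 100.0
    currentTempF = currentTempC * 9 / 5 + 32
    print('{0:4.3f}C {1:4.3f}F'.format(currentTempC, currentTempF))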

Related

Evaluation expression:- Python vs Javascript

I'm replicating an expression written in JavaScript in Python.
Here's the expression in JavaScript:
var q = 1;
var c = [608875978, 500902236, -1359500678, -1631660920];
var x = c[q >>> 2] >>> 24 - q % 4 * 8 & 255;
Output: x = 74
Here is the similar code in Python:
def rshift(val, n):
    return (val % 0x100000000) >> n
i = 1
words = [608875978, 500902236, -1359500678, -1631660920]
x = rshift((words[rshift(i, 2)]), 24) - i % 4 * 8 & 255
print(x)
Output: x = 28
I'm sure rshift is returning the right value. What exactly is wrong with Python evaluation vs JS evaluation?
I just tried the same in python and it outputs 74:
q = 1
c = [608875978, 500902236, -1359500678, -1631660920]
x = c[q >> 2] >> 24 - q % 4 * 8 & 255
print(x)  # prints 74
Note: in Python, I just replaced >>> with >>.
I'm not a Python expert - I only started learning yesterday - so I can't fully analyze your example code, but there does seem to be an issue in how rshift is applied: rshift(words[rshift(i, 2)], 24) shifts by 24 on its own and only subtracts i % 4 * 8 afterwards, whereas in the JS expression the whole 24 - q % 4 * 8 is the shift amount.
As far as I know, both languages follow the same operator precedence rules for this expression.
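For completeness, a minimal sketch that keeps a helper for the unsigned shift but passes it the whole shift amount, assuming 32-bit JS semantics (the name urshift32 is my own):

def urshift32(val, n):
    # emulate JS >>> : treat val as an unsigned 32-bit integer, then shift
    return (val & 0xFFFFFFFF) >> n

q = 1
c = [608875978, 500902236, -1359500678, -1631660920]
# keep 24 - q % 4 * 8 together, exactly as it groups in the JS expression
x = urshift32(c[q >> 2], 24 - q % 4 * 8) & 255
print(x)  # 74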

Why isn't my negative number properly obtained in NodeJS?

I am using Node.js to parse a hex string. I am trying to convert the hex values into integers using parseInt, but I am running into difficulties with the negative numbers, and I don't understand why.
I have the hex string D3FFBDFFF900, which encodes the integers x: -0.45*100, y: -0.67*100, z: 2.49*100 in this way:
D3FF | BDFF | F900 => -0.45*100 | -0.67*100 | 2.49*100
And I have created the following code snippet (I do know that the division by 100 is missing there):
var x = "D3FFBDFFF900".substring(0,4);
var y = "D3FFBDFFF900".substring(4,8);
var z = "D3FFBDFFF900".substring(8);
console.log("x:"+x);
console.log("y:"+y);
console.log("z:"+z);
console.log("parseInt x "+parseInt(x.toString(16),16));
console.log("parseInt y "+parseInt(y.toString(16),16));
console.log("parseInt z "+parseInt(z.toString(16),16));
Why isn't parseInt able to decode at least the values x = -45, y = -67 and z = 249, and why do I get the above output instead?
Thanks in advance,
EDIT: the data is encoded as shown below, where print just writes the hex characters to a serial bus:
#define NIBBLE_TO_HEX_CHAR(i) ((i <= 9) ? ('0' + i) : ('A' - 10 + i))
#define HIGH_NIBBLE(i) ((i >> 4) & 0x0F)
#define LOW_NIBBLE(i) (i & 0x0F)
for (int i = 0; i < size; ++i) {
    print(static_cast<char>(NIBBLE_TO_HEX_CHAR(HIGH_NIBBLE(payload[i]))));
    print(static_cast<char>(NIBBLE_TO_HEX_CHAR(LOW_NIBBLE(payload[i]))));
}
and the values x, y, z are obtained as below, where accelerometer.getX() returns a double:
x = (int16_t)(accelerometer.getX()*100)
y = (int16_t)(accelerometer.getX()*100)
z = (int16_t)(accelerometer.getX()*100)
How should the parser know that the bytes are swapped and that each value is a 16-bit (4 hex digit) number?
0xD3FF = 15 * 1 + 15 * 16 + 3 * 256 + 13 * 4096 = 54271
-45 = -0x2D
-67 = -0x43
249 = 0xF9
The parser does a correct job.
To parse the received hex values you have to swap the two bytes:
D3FF => FFD3
Next you have to parse the hex as an integer. If the value is >= 0x8000, the number is negative: invert the 16-bit binary representation, add 1, and negate.
0xFFD3 = 65491 >= 0x8000 = 32768
-((~65491 & 0xFFFF) + 1) = -45
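For comparison with the Python questions elsewhere on this page, here is a minimal Python sketch of the same decoding; struct.unpack's '<h' format handles both the little-endian byte order and the two's-complement sign in one step (variable names are my own):

import struct

raw = bytes.fromhex("D3FFBDFFF900")
x, y, z = struct.unpack('<hhh', raw)  # three signed little-endian 16-bit integers
print(x / 100, y / 100, z / 100)      # -0.45 -0.67 2.49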

From hex to float - Javascript [duplicate]

I am trying to convert a hex string to a float number in Javascript.
Suppose that I have the hex string "0082d241". Using this online converter and selecting Swap endianness, the correct float value is 26.3135.
I know that this is the correct answer because it is from a TMP36 sensor.
I have tried some other examples that I found here on SO, such as Converting hexadecimal to float in javascript, but none of them worked.
The first step is to swap endianness, using the source code found on the page that you've shown.
Then you can convert the hexadecimal string to a float value.
function flipHexString(hexValue, hexDigits) {
  var h = hexValue.substr(0, 2);
  for (var i = 0; i < hexDigits; ++i) {
    h += hexValue.substr(2 + (hexDigits - 1 - i) * 2, 2);
  }
  return h;
}
function hexToFloat(hex) {
  var s = hex >> 31 ? -1 : 1;
  var e = (hex >> 23) & 0xFF;
  return s * (hex & 0x7fffff | 0x800000) * 1.0 / Math.pow(2, 23) * Math.pow(2, e - 127);
}
console.log(hexToFloat(flipHexString("0x0082d241", 8)));
console.log(hexToFloat(flipHexString("0x5d7e2842", 8)));
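For a quick cross-check in Python (used elsewhere on this page), struct decodes the little-endian IEEE-754 value directly, so no manual byte flip is needed:

import struct

value = struct.unpack('<f', bytes.fromhex('0082d241'))[0]  # '<f' = little-endian 32-bit float
print(value)  # ~26.3135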

Generate a random math Equation using Random numbers and operators in Javascript

I want to create a program that prints out a simple mathematical expression like ( 21 + 13 ) * 56 using random numbers from 1 to 100. The program must take a level parameter; the level determines the length of the generated equation. For example:
The game must produce equations with addition + and multiplication * operators, like ( 21 + 13 ) * 56 (using brackets).
----level 2
75 - 54 = 21
62 + 15 = 77
88 / 22 = 4
93 + 22 = 115
90 * 11 = 990
--level 3
( 21 + 13 ) * 56 = 1904
82 - 19 + 16 = 79
51 * ( 68 - 2 ) = 3366
The input would be of the form, for example:
level 3
Output should be:
( 21 + 13 ) * 56 // a simple expression using random numbers
So far I can create equations without brackets, but I need help getting to a reliable solution.
This is what I have done so far:
var input = 'level 3'
input = input.split(' ')
var n = Number(input[1])
var x = ['/', '*', '-', '+']
function randomNumberRange(min, max) {
  return Math.floor(Math.random() * (max - min) + min);
}
var a = ''
for (var i = 0; i < n; i++) {
  if (i !== n - 1) {
    var n1 = randomNumberRange(1, 100)
    var m = randomNumberRange(0, x.length);
    var str = x[m];
    a += n1
    a += ' ' + str + ' '
  } else {
    a += n1
  }
}
I picked up #plamut's idea of creating a binary tree, where each node represents an operator with a left and a right side.
For instance, the equation 2 * (3 + 4) can be seen as
*
/ \
2 +
/ \
3 4
You can represent this quite straightforwardly using objects as follows:
var TreeNode = function(left, right, operator) {
  this.left = left;
  this.right = right;
  this.operator = operator;
  this.toString = function() {
    return '(' + left + ' ' + operator + ' ' + right + ')';
  }
}
Then you can create a recursive function to build such trees, where one sub-tree would have half of the desired total number of nodes (= length of equation):
function buildTree(numNodes) {
  if (numNodes === 1)
    return randomNumberRange(1, 100);
  var numLeft = Math.floor(numNodes / 2);
  var leftSubTree = buildTree(numLeft);
  var numRight = Math.ceil(numNodes / 2);
  var rightSubTree = buildTree(numRight);
  var m = randomNumberRange(0, x.length);
  var str = x[m];
  return new TreeNode(leftSubTree, rightSubTree, str);
}
Here's a JSFiddle with a working example.
Maybe you still want to take care of special cases, like avoiding brackets at the top level, but that shouldn't be too hard from here.
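For comparison, here is a rough Python sketch of the same binary-tree idea; the names and the even operand split are my own choices, not taken from the JSFiddle:

import random

OPERATORS = ['/', '*', '-', '+']

def build_tree(num_leaves):
    # one random number per leaf; an operator node joins two sub-trees
    if num_leaves == 1:
        return str(random.randint(1, 100))
    left = build_tree(num_leaves // 2)
    right = build_tree(num_leaves - num_leaves // 2)
    return '(' + left + ' ' + random.choice(OPERATORS) + ' ' + right + ')'

print(build_tree(3))  # e.g. (21 + (13 * 56))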

Math operation in JS

My code is supposed to calculate mortgage payments.
var LA = 100000;
var RA=0.07;
var YA=30;
var R = ( RA / 12);
var r = (1 + R);
var Yr = (YA * 12);
var pay = (LA * Math.exp(r,Yr)*R)/(Math.pow(r,Yr)-1);
It returns $224.12, which is wrong; it needs to be $665.30.
payment = [ LA * r^Yr * R ] / [ r ^Yr - 1]
For example:
30 year mortgage for $100,000 at 7% interest (0.07)
0.07 / 12 = 0.00583 (this is R)
30 * 12 = 360 (this is Yr)
1 + 0.00583 = 1.00583 (this is r)
payment = [ $100,000 * (1.00583)^360 * 0.00583 ] / [ (1.00583)^360 - 1 ]
Monthly Payments will be $665.30
any tips?
Use the correct function: Math.pow and not Math.exp.
Also, although square brackets will work, it's only because JavaScript is casting the arrays to strings, and then to numbers. Use parentheses instead.
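As a quick numeric check (sketched in Python, which this page also uses; ** plays the role of Math.pow), the formula does give the expected payment:

LA = 100000        # loan amount
R = 0.07 / 12      # monthly interest rate
r = 1 + R
Yr = 30 * 12       # number of monthly payments
pay = (LA * r**Yr * R) / (r**Yr - 1)
print('{0:.2f}'.format(pay))  # 665.30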
