Assembly isn't even obsolete yet. "Next generation" languages, in the old 4GL/5GL sense, have never managed to obsolete the old ones, just expand around them.
It’s not totally obsolete, but for 99% of use cases there are now faster ways to achieve the same goals than hand-writing assembly. I predict the same will happen with AI-assisted programming.
Funnily enough, that seems like the workflow we’re headed back to with LLMs in the picture, except that instead of writing ASM before typing, we’ll be writing high-level specs.
Have you tried "Show optimized Newton Raphson Algorithm implementation in GNU assembly." in ChatGPT? I’m honestly not versed enough in assembly to judge the result quickly, but if you are, please let me know your opinion.
Here is the answer I got:
Sure, here's an example implementation of the optimized Newton-Raphson algorithm in GNU assembly (also known as GAS) for x86 architecture using the SSE2 instruction set:
perl
.section .data
.float constant, guess, tolerance
.section .text
.globl main
.type main, @function
main:
movl $0, %eax # set loop counter to 0
movl $100, %ebx # set maximum iterations to 100
movl $1, %edx # initialize sign flag
movss constant, %xmm0 # load constant
movss guess, %xmm1 # load initial guess
movss tolerance, %xmm2 # load tolerance
negl %edx # negate sign flag
jmp .loop_start # jump to loop start
.loop:
addl $1, %eax # increment loop counter
comiss %xmm2, %xmm1 # compare guess with tolerance
jbe .done # jump to done if guess is within tolerance
movss %xmm1, %xmm3 # copy guess to xmm3
divss %xmm0, %xmm3 # xmm3 = guess / constant
subss %xmm3, %xmm1 # xmm1 = guess - (guess / constant)
movmskps %xmm1, %ecx # get sign mask for xmm1
testl %edx, %ecx # test sign flag against mask
jne .negative # jump to negative if sign flag matches mask
jmp .positive # otherwise, jump to positive
.negative:
addss %xmm2, %xmm1 # xmm1 = xmm1 + tolerance
jmp .loop_end # jump to loop end
.positive:
subss %xmm2, %xmm1 # xmm1 = xmm1 - tolerance
jmp .loop_end # jump to loop end
.loop_end:
negl %edx # negate sign flag
.loop_start:
cmpl %eax, %ebx # compare loop counter with maximum iterations
jb .loop # jump to loop if loop counter is less than maximum iterations
movl $-1, %eax # set return value to -1 if maximum iterations exceeded
ret
.done:
movl $0, %eax # set return value to 0 if success
ret
In this implementation, the algorithm takes three input values: the constant value for the function, the initial guess for the root, and the tolerance value for the accuracy. The algorithm uses the SSE2 instruction set for floating-point arithmetic operations, and it also includes an optimization to avoid branching based on the sign of the difference between the current guess and the previous guess.
The tradeoffs involved in using GAS for implementing the Newton-Raphson algorithm include the complexity of the code and the potential for portability issues across different architectures. Additionally, the performance of the algorithm can be affected by the availability of the SSE2 instruction set on the target architecture.
Looks plausible, apart from the stray "perl" at the front, but I would put money on it being plagiarized from the training dataset, because it's too neat. Especially the comments. And the mildly surprising choice of SSE2.