Jacobian transpose vs pseudoinverse

Let’s derive both of these celebrated algorithms from first-order principles to see how they differ.

Jacobian transpose. Let $x_d$ be the desired state of the end-effector, $x(q)$ its current state corresponding to the joint configuration $q$, and let $J$ denote the Jacobian $\partial x / \partial q$. Naive minimization of the error in Cartesian space

$$L[q] = \frac{1}{2} (x_d - x(q))^T (x_d - x(q))$$

has the gradient $\nabla_q L = -J^T (x_d - x(q))$ and thus leads to the gradient descent algorithm

$$q_{\text{next}} = q + \alpha J^T (x_d - x(q)),$$

where $\alpha$ is the step size.
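As a concrete illustration, here is a minimal sketch of this update in Python with NumPy. The names `fk` and `jac` are assumptions, not from the text: caller-supplied callables for the forward kinematics $x(q)$ and the Jacobian $\partial x / \partial q$.

```python
import numpy as np

def jacobian_transpose_step(q, x_d, fk, jac, alpha=0.1):
    """One gradient descent step on L[q] = 1/2 ||x_d - x(q)||^2.

    fk(q)  -> end-effector state x(q)        (assumed supplied by caller)
    jac(q) -> Jacobian dx/dq evaluated at q  (assumed supplied by caller)
    """
    e = x_d - fk(q)                  # Cartesian error x_d - x(q)
    return q + alpha * jac(q).T @ e  # q_next = q + alpha * J^T * e
```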

Jacobian pseudoinverse. Consider a small change in the joint configuration $\Delta q$. It leads to a small change in the state of the end-effector, $\Delta x = J \Delta q$. Since we want the end-effector to move in the direction of $x_d$, let’s require $\Delta x = \alpha (x_d - x)$ for some positive $\alpha$. Question: “What is the smallest $\Delta q$ that accomplishes that?” It is well known that the minimum-norm solution of an underdetermined system of linear equations is given by the pseudoinverse, so $\Delta q = J^+ \Delta x$. Thus, we arrive at the following iterative algorithm:

$$q_{\text{next}} = q + \alpha J^+ (x_d - x(q)),$$

where $J^+ = J^T (J J^T)^{-1}$ if $J$ has full row rank (which is usually the case).
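Under the same assumptions about `fk` and `jac`, a minimal sketch of the pseudoinverse update; `np.linalg.pinv` computes the Moore-Penrose pseudoinverse $J^+$, which equals $J^T (J J^T)^{-1}$ whenever $J$ has full row rank.

```python
import numpy as np

def jacobian_pseudoinverse_step(q, x_d, fk, jac, alpha=0.1):
    """One step q_next = q + alpha * J^+ * (x_d - x(q)).

    np.linalg.pinv returns the Moore-Penrose pseudoinverse, which
    coincides with J^T (J J^T)^{-1} when J has full row rank.
    """
    e = x_d - fk(q)                               # Cartesian error
    return q + alpha * np.linalg.pinv(jac(q)) @ e
```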

Comparison. Note that both algorithms have the form $\Delta q = J^T \Delta x$; the difference between them lies in the choice of $\Delta x$. The Jacobian transpose algorithm sets the step in Cartesian space proportional to the distance to the goal, $\Delta x = \alpha (x_d - x(q))$, whereas the Jacobian pseudoinverse in addition accounts for the curvature of the space, $\Delta x = \alpha (JJ^T)^{-1} (x_d - x(q))$.
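To make the comparison tangible, here is a small self-contained sketch that runs both updates on a hypothetical planar two-link arm; the model, link lengths, target, step size, and iteration count are all made up for illustration. Both updates are written in the shared form $\Delta q = \alpha J^T \Delta x$, differing only in $\Delta x$.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link arm (illustrative model)."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=1.0, l2=1.0):
    """Analytic Jacobian dx/dq of fk."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

x_d = np.array([1.2, 0.8])  # made-up reachable target
steps = {
    # transpose: dx = x_d - x(q)
    "transpose":     lambda q, e: e,
    # pseudoinverse: dx = (J J^T)^{-1} (x_d - x(q));
    # note solve() fails at a singular configuration (J J^T singular)
    "pseudoinverse": lambda q, e: np.linalg.solve(jac(q) @ jac(q).T, e),
}
for name, dx in steps.items():
    q = np.array([0.3, 0.3])
    for _ in range(200):
        e = x_d - fk(q)
        q = q + 0.1 * jac(q).T @ dx(q, e)  # shared form: dq = alpha * J^T * dx
    print(name, "final error:", np.linalg.norm(x_d - fk(q)))
```

On this toy problem the pseudoinverse update contracts the Cartesian error roughly by a factor of $(1 - \alpha)$ per step (to first order), whereas the contraction rate of the transpose update depends on the conditioning of $JJ^T$.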