Like just about everything in programming, it seems to be a tradeoff.
In Java the try/catches are (a) implicitly "registered" on the call stack, and when an exception is thrown (b) then the runtime has to do a little analysis to determine what to do.
In Lisp, the handlers are (a) explicitly registered up front and when an "exception" is thrown (b) it just uses the most recently registered handler.
(a) is a much more common task than (b), so that's where you should optimize. But I don't know enough about Lisp (or Java) to know how these are implemented. If it's just pushing/popping on a stack then maybe it's just as fast.
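To make the "just pushing/popping on a stack" idea concrete, here is a minimal sketch (in Java, purely hypothetical; `HandlerStack`, `withHandler`, and `signal` are invented names, not real Lisp or Java runtime machinery) of the Lisp-style scheme: registering a handler is a cheap push, unregistering a pop, and signalling dispatches to the most recently registered handler.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Hypothetical sketch of a Lisp-style handler stack:
// (a) registration is an O(1) push/pop, (b) dispatch just
// picks the most recently pushed handler.
public class HandlerStack {
    private static final Deque<Consumer<String>> handlers = new ArrayDeque<>();

    static void withHandler(Consumer<String> handler, Runnable body) {
        handlers.push(handler);   // (a) explicit registration: O(1) push
        try {
            body.run();
        } finally {
            handlers.pop();       // unregistration: O(1) pop
        }
    }

    static void signal(String condition) {
        // (b) dispatch: no stack analysis, just the innermost handler
        handlers.peek().accept(condition);
    }

    public static void main(String[] args) {
        withHandler(c -> System.out.println("outer handles: " + c), () ->
            withHandler(c -> System.out.println("inner handles: " + c), () ->
                signal("oops")));   // prints "inner handles: oops"
    }
}
```

This glosses over a lot (real condition systems match on condition types, support restarts, and unwind the stack), but it shows why the registration side can be as cheap as two stack operations.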