Here's a paradox that I think most mathematicians will instantly understand, but that greatly puzzles everyone else I've tried telling it to. It's due to UCI philosopher of science Jeff Barrett and his coauthor Frank Arntzenius.

You start with an infinite pile of dollar bills in front of you. The bills have integer serial numbers, numbered sequentially with bill number one at the top of the pile, but they all have the same value. You're going to make an infinite sequence of moves, with the goal of collecting as much money as possible.
On move n, you have two choices: you can either take a single bill from the top of the pile, or you can take a larger number of bills, 2^(n+1) (4 on the first move, 8 on the second, 16 on the third, and so on). But whenever you take the larger number of bills, you must then burn the smallest-numbered bill in your hand, so your net gain on that move is a little smaller: 3, 7, 15, etc.

The question is: what is your optimal strategy?

Clearly, taking the larger number of bills maximizes your profit at each step.
But if you do this every time (or even on infinitely many steps), you will end up with nothing: because the bills come off the pile in serial-number order, every bill you take eventually becomes the smallest-numbered bill in your hand, and is then burned. So it's better just to take one bill at each step, ending with all the bills in your hand and nothing burned.
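A quick way to see the greedy collapse is to simulate finitely many moves: after n greedy moves, bills 1 through n have already been burned, so in the limit every bill is burned. Here's a minimal sketch (my own illustration, not from Barrett and Arntzenius's paper):

```python
# Simulate the two strategies for finitely many moves. Bills are
# represented by their serial numbers, taken off the pile in order.

def greedy(moves):
    """Always take 2^(n+1) bills on move n, then burn the smallest in hand."""
    hand = []          # serial numbers currently held, kept in sorted order
    next_serial = 1    # serial number at the top of the pile
    burned = 0
    for n in range(1, moves + 1):
        take = 2 ** (n + 1)
        hand.extend(range(next_serial, next_serial + take))
        next_serial += take
        hand.pop(0)    # burn the smallest-numbered bill in hand
        burned += 1
    return hand, burned

def patient(moves):
    """Always take a single bill; nothing is ever burned."""
    return list(range(1, moves + 1)), 0

hand, burned = greedy(10)
print(len(hand))   # 4082 -- a huge hand after only 10 moves...
print(min(hand))   # 11   -- ...but bills 1 through 10 are gone forever
```

The greedy hand grows exponentially, but its smallest serial number after move n is always n+1: bill k is burned on move k, so no individual bill survives forever, and the limit of the greedy strategy holds nothing at all.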

There's been a bunch of follow-up work trying to explain why this isn't really a paradox at all (and it isn't; the mathematics of it is perfectly clear), but it seems genuinely confusing to a lot of people. And Barrett uses it to argue a more serious point: when we try to define rational decision-making, we have to look at the whole context of the world in which the decisions are made and how they fit together; it doesn't work to declare each individual decision rational on its own merely because it maximizes utility.