First, I think the buffer setting in ASIO4ALL is not the only parameter that impacts latency. The actual hardware (your soundcard) can also add to it, so even if one person's buffer of 144 samples at, say, 44.1 kHz yields a theoretical minimum latency of 3.3 ms (assuming single buffering), it may be more than that on someone else's hardware. You have to measure it to find the real latency. You may be surprised, as I have been...
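The buffer-to-latency arithmetic is simple; here is a rough sketch (plain Python, using the 144-sample / 44.1 kHz figures above; again, this is only the theoretical floor, not the measured value):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: float) -> float:
    """Theoretical minimum latency of one buffer, in milliseconds.
    Real hardware and drivers add more on top of this."""
    return 1000.0 * buffer_samples / sample_rate_hz

print(round(buffer_latency_ms(144, 44100), 2))  # ≈ 3.27 ms
```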
Secondly, there is a difference between the smallest delays we can detect initially and what we very quickly adjust to.
When I started playing my first board, a JV-80, I initially tried to avoid using it in performance mode, as it had a perceptible delay compared to simple play mode... I even talked to Roland about this, but never met anyone else who had noticed the problem. Even today, when playing one keyboard via MIDI, I can still notice a slight "hesitation" compared to when I use the built-in sounds of the same keyboard.
However, this has never been a problem for me in real-life situations. I just tell myself to forget about it, and after a few minutes I have adjusted to the playing.
Thirdly, you could validly ask what the optimal latency is.
Many would say 0 ms, but that is not so, and not what you're used to...
Research shows that a real piano has a latency difference of 30 ms or more between quietly played piano notes and staccato forte notes (1); some quote a slightly later version of this work as concluding around 30 ms latency for staccato forte notes and 100 ms for piano notes (2). My current DP in fact has a setting that controls how the delay varies with playing level, to mimic a real piano.
Further, sound travels at roughly 1/3 of a meter per ms. In a symphony orchestra, this means there can be delays of as much as 40 ms between members of the orchestra. If your speaker is 2 meters away, there is already a latency of about 6 ms, so instead of reducing your PC latency, one could use headphones instead, at least in the studio?
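To make the acoustic-path numbers concrete, here is a small sketch (assuming the usual ~343 m/s speed of sound in room-temperature air; the 14 m distance is just an illustrative span across a large orchestra):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 °C

def acoustic_delay_ms(distance_m: float) -> float:
    """Time for sound to travel distance_m meters, in milliseconds."""
    return 1000.0 * distance_m / SPEED_OF_SOUND_M_PER_S

print(round(acoustic_delay_ms(2.0), 1))   # ≈ 5.8 ms for a speaker 2 m away
print(round(acoustic_delay_ms(14.0), 1))  # ≈ 40.8 ms across a large stage
```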
Lastly, too short a latency affects us negatively... Research (3) has shown the optimal latency from one player to the next to be around 11.5 ms. If it is less than that, we tend to speed up instead of playing in sync.
We are very good at keeping time. Trained people can probably keep a steady rhythm within a 4 ms "window"; some even less. But we can adapt to a wide range of delays, and other research seems to suggest that the range of -25 ms to +42 ms is perceived by most as "simultaneous".
If you're also a bass player, remember that even detecting the pitch of a low A (110 Hz) likely requires on the order of 9 ms, the time it takes for a single full cycle to be produced.
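That figure is just the period of the note; a one-line sketch:

```python
def cycle_time_ms(freq_hz: float) -> float:
    """Duration of one full cycle of a tone, in milliseconds."""
    return 1000.0 / freq_hz

print(round(cycle_time_ms(110.0), 1))  # ≈ 9.1 ms for a low A
```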
So, as musicians, and in practical terms, I would say that latencies of less than 20 ms are something most of us can unconsciously adapt to (unless you think too hard about it). I'm probably quite sensitive to latency myself, but I have learned that the benefits of chasing 3 ms are not worth the pain; I can play well in time with latencies of 10 ms or more.
Now, the untold story is jitter... the random variation in latency. It, too, is related to the buffer size. As you produce sound, it gets stored in a buffer, and when that buffer is full, it is sent to the output. If your note happens just as a new buffer has started filling, it must wait nearly a full buffer before being sent; if it happens just as the buffer is about to be sent, there is virtually no (extra) delay. So your buffer size effectively becomes your jitter -- and that should in practice be kept as low as possible. The results of (2) suggest jitter should be kept below 6 ms.
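A toy simulation of this (my own sketch, assuming notes land at uniformly random moments within the buffer-fill cycle) shows the extra delay spreading across nearly one full buffer:

```python
import random

def extra_delay_ms(buffer_ms: float) -> float:
    """Extra wait for a note landing at a random point while the buffer fills:
    near the start of a fill -> almost a full buffer of waiting;
    near the end of a fill -> almost no extra wait."""
    offset = random.uniform(0.0, buffer_ms)  # where in the fill the note lands
    return buffer_ms - offset

buffer_ms = 1000.0 * 144 / 44100  # the ~3.3 ms buffer from the figures above
delays = [extra_delay_ms(buffer_ms) for _ in range(10_000)]
print(round(max(delays) - min(delays), 2))  # spread approaches one full buffer
```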
Apologies for the long and boring scientific approach here!