Understanding database performance, and predicting what will make an impact, is tricky because there are lots of moving parts. I’m guessing you tested on your development machine, perhaps running the database locally, or perhaps on the LAN. I’m guessing the database was doing very little at the time of the test (other than your test).
Of course conditions out in the field vary wildly, and each difference has an impact. The quality of the server machine (RAM, CPU, etc.), the overall speed and congestion of the network, the performance of the hard drives, the load being placed on the server, the number of client programs, and so on all have an impact.
So there are two approaches to performance.
The first is to figure out when and where it is slow, and spend time optimising that. This is an efficient use of your time because you are attacking meaningful issues. It can be a bit of whack-a-mole, but over time, generally, the program will feel speedy. (And you’ll likely learn some new techniques along the way.)
The second approach is to code “efficiently”. Meaning that you write code which is theoretically performant, but whose benefit you may never actually observe. For example, the advantage of a VIEW is requesting fewer fields from the server. (The write-back is equally efficient regardless of whether you are PUTting via a FILE or a VIEW, so that’s not in play here.) The read efficiency will really only be obvious when the database resources are constrained, the database is under load, or the network is constrained.
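As a rough sketch of what that looks like in Clarion (the Customers FILE, the Cus: field names, and the key are all illustrative, and the FILE is assumed to be declared elsewhere), a VIEW with PROJECT tells the driver to request only the listed fields:

```clarion
CustView  VIEW(Customers)          ! reads from the Customers FILE
            PROJECT(Cus:Name)      ! only these fields travel
            PROJECT(Cus:Balance)   ! from the server to the client
          END
  CODE
  OPEN(Customers)                  ! the underlying FILE must be open
  OPEN(CustView)
  SET(CustView)
  LOOP
    NEXT(CustView)
    IF ERRORCODE() THEN BREAK.
    ! work with Cus:Name and Cus:Balance here
  END
  CLOSE(CustView)
  CLOSE(Customers)
```

The fewer and narrower the PROJECTed fields, the less data crosses the wire per row, which is exactly the saving that only shows up when the network or server is under pressure.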
Most programmers start out with approach 1, learning as they go. Over time they develop habits which bring more and more of approach 2 into play. Approach 1 never goes away, but it becomes the exception, not the rule.
Hence the advice for stored procedures. They take the network out of play. They take the client machine out of play, and move the code “close” to the data. In most places it’s good general advice. However (as mentioned above) there are some downsides. Indeed almost all optimisation comes with some sort of downside.
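For what it’s worth, from Clarion the usual way to hand work to the server is PROP:SQL on a SQL-backed FILE (the procedure name here, spUpdateBalances, is just an assumed example — your backend and naming will differ):

```clarion
  CODE
  ! Assumed stored procedure; the whole operation runs server-side,
  ! so no row data crosses the network at all.
  Customers{PROP:SQL} = 'CALL spUpdateBalances()'
  IF ERRORCODE()
    ! FILEERROR() carries the backend's error text
    MESSAGE('Stored procedure failed: ' & FILEERROR())
  END
```

If the procedure returns a result set, the rows come back through the file buffer via NEXT(Customers) as usual — which is part of the downside mentioned above: the logic now lives in the database, outside your Clarion source.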
For example, switching to a Clarion VIEW from a FILE loop comes at a cognitive cost for the programmer. FILE loops are so easy that we just got used to them, and seldom moved to VIEWs. (The new drivers acknowledge this, and move all the VIEW efficiency back to the FILE, so one can still write FILE loops but at the same time be efficient.)
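For contrast, here is the familiar FILE-loop shape (again with illustrative names — a Customers FILE keyed on Cus:NameKey). Its ease is obvious, but note that every NEXT pulls the entire record, every field, across the connection:

```clarion
  CODE
  OPEN(Customers)
  SET(Cus:NameKey)                 ! position at the start of the key
  LOOP
    NEXT(Customers)                ! fetches the WHOLE record each time
    IF ERRORCODE() THEN BREAK.
    ! all of Cus:Record is in the buffer, even if only
    ! one or two fields are actually used below
  END
  CLOSE(Customers)
```

Which is why drivers that quietly apply VIEW-style field trimming to plain FILE loops are such a nice win: the easy idiom stays, the waste goes.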
Ultimately though, customers will tell us where things are slow on their server, with their network, and then we can figure out new approaches in those specific situations.