Friday, May 16, 2008
Who wants to be a developer?
I enjoy reading Joel Spolsky's blog, Joel on Software. Recently, he wrote a humorous article about IE 8, and although it was funny (he often is), it gave a critical insight into the state of software development as it currently stands.
If you have 10 minutes, do yourself a favour and read Joel's article. I hate being a doomsayer, but forewarning is foreknowledge, as the saying goes, or as Proverbs puts it, "A wise man sees trouble afar off and prepares himself".
Labels: Development
Comments:
Hahahahahaha
Reminds me of the decision to include memory protection as an integral part of operating systems.
This was done to accommodate all the broken code that’s around. If you were lucky enough to develop on and use Amigas when they were in their heyday, you’d have come across a developer’s tool called Enforcer. The Amiga was lightning quick (my 50 MHz A1200 is still good for all kinds of MIDI sequencing), and the windowing interface is to this day far faster (near realtime) to respond than Vista, XP, 2000, or 98. It was an excellent multitasker, to a large degree because memory protection was a development tool, and NOT something the OS needed in order to be stable.
It was a tool to debug your software. If your code is good, testing it with Enforcer registers no memory-protection faults, and it can then run without memory protection.
Now we’ve gone a step further and we have data execution prevention. This is something that, like Enforcer, would not need to be included, except that we expect software to be poorly written. Because we don’t debug properly, we need a service running (and CPU hardware to support it) to catch problems that should have been fixed while the software was being written.
Crazy. Open standards will win out in the end, I feel, possibly at significant expense to MS market share. End users are fed up with it. Developers are fed up with it, and MS is the primary culprit, pushing out modified versions of open standards and telling everyone that’s better. It’s not.
Another example of this kind of development insanity is proxy autodetection under MS. The MS DHCP server broadcasts an incorrect (+1) string length for the URL. Instead of fixing the DHCP server, MS “fixed” the client, making it subtract 1 from the incorrect string length to get the correct value.
So anyone who wants to implement this using the Internet Systems Consortium’s (ISC) DHCP server, which behaves exactly the same regardless of the OS it’s compiled on, has to append a trailing space to the string so that MS clients don’t truncate the URL by one character, because the ISC server correctly broadcasts the length.
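For what it’s worth, here is a minimal sketch of that server-side workaround, assuming ISC dhcpd is serving the proxy autodiscovery URL as DHCP option 252. The option name, subnet, and URL are illustrative only, not taken from a real network:

    # Declare option 252 as text for the proxy autodiscovery URL (illustrative)
    option wpad code 252 = text;

    subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.200;
        # Trailing space pads the string so MS clients, which subtract 1
        # from the broadcast length, still receive the full URL intact.
        option wpad "http://wpad.example.com/wpad.dat ";
    }

Some setups append "\n" instead of a space; either way, the padding character is the one the MS client drops, leaving the real URL untouched.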
What really gets me is that so many people defend Microsoft’s development practices, even though it’s all closed and they couldn’t possibly know what goes through MS developers’ heads, but they defend it anyway; they’ve been brainwashed. It’s better because it’s MS; they’re the “standard” (ha HA!)
It’s not better. It’s never been better. We’ve all been just a bit scared of anything different. Self-reinforcing market share. Till now. People are finally starting to buck, and we’re reaching the point where the chickens are coming home to roost.
These issues will need to be addressed, and that might mean slowing down development until we have standards that work. It’s not as though we lack functionality now. Most people use about 1% of any given software feature set; why on earth do we keep adding more features when we haven’t got the last set right?
It will be a long and difficult road to genuine standards-based computing, though, methinks. The #1 thing that has to stop is the kind of fix demonstrated by the proxy autodetect bug. Stop that kind of insanity, debug software properly, and standards will eventually become real standards that can be relied upon.
J
P.S. Hardware manufacturers don't get a free lunch on this issue either. nVidia, take note: you may do great video chipsets, but if you can't make a reasonable storage controller, please buy a design from someone who can.