because humans are not perfect – their products are not perfect.
nobody is god – meaning – nobody is perfect – nobody’s work is without error – capitalism, socialism, dictatorship, democracy… all man-made systems have errors.
So you can assume this article is also not perfect, and i encourage you to contribute to bring it nearer to the 99% perfection that is possible.
Nobody is perfect – but who would like to be a nobody?
Everybody wants to be somebody to someone.
“Loneliness is almost the opposite of happiness” (Manfred Spitzer) – so one defines oneself through relationships.
Being lonely is worse than smoking or fatigue!
So that’s why everyone would like to be a famous rock star, politician or actor.
To have meaningful relationships with the rest of the world.
English version: “I hope that everyone can become rich and have everything they ever wanted, so that they realize that this is not the answer.”
What is the difference between man and the animals?
Considering that 99% of the genome of pigs and humans is identical, it’s the software that is running on their hardware (brains) that makes us ask:
Also when it comes to software development… it’s boring being “alone” on a problem. it’s always more fun having a sort of competition over who solves the problem first 😉 (unless one guy keeps winning all the time…)
Speaking of software – because humans are not perfect – their products are not perfect.
But there are methods to error-correct yourself
so that your product does not waste your customers’ valuable life-time (the path that bill gates and apple’s marketing-guru CEO (Steve Jobs’ successor – everybody misses your innovation massively!) have chosen) and so they don’t need to get angry with you (more often than absolutely necessary).
Self-correcting methods like test documentation (manually testing your software to check whether it can do all the demanded functionality) cost you a lot of time – but save the customer a lot of time – hence: ensure good software quality.
Because bad-quality software steals our most valuable resource: human time.
And that should be a crime.
everyone likes to be a creator
in our deepest hearts and minds we know that we are born to something more than the repetitive, stupid, boring operating of a machine in a factory or office.
we want to be creators.
creators of our own life… of tools that improve our life and the life of our family and friends and everyone on this planet.
simplicity is magic – keep things simple = testable
the art of software engineering
delivering high quality software in-time and in-budget is the “holy grail” of software engineering.
it is a sport that is still under heavy development, because humans are not god; they make mistakes.
how to properly deal with this not-god property is the search for “the holy grail of software engineering – in-time + in-budget”.
it is still an evolving process in which mankind experiences that nobody is perfect = nobody is god, and everyone has mental limits.
when you engage in programming, no matter what language, there are methods/workflows that have proven to produce good results.
here are my methods for dealing with that:
EXAMPLES + TECHNICAL TESTS
it’s quite a good idea to create an “EXAMPLE” project:
in order to test & train the functionalities you need.
test: is it working? how good is it working? is it fast/slow/reliable/unreliable?
get some test data – works? now get a massive amount of test data (like 3 million MySQL records) – still works? good job!
train: is it working the way i think it is working?
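the small-then-massive idea above can be sketched like this (a minimal sketch: an in-memory sqlite3 database stands in for the real MySQL instance, and `build_test_db`/`timed_lookup` are illustrative names, not from any real project):

```python
import sqlite3
import time

def build_test_db(row_count):
    # in-memory SQLite database stands in for the real MySQL instance
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany(
        "INSERT INTO records (name) VALUES (?)",
        (("user%d" % i,) for i in range(row_count)),
    )
    return db

def timed_lookup(db, name):
    # the functionality under test: a simple lookup, with a stopwatch around it
    start = time.time()
    row = db.execute("SELECT id FROM records WHERE name = ?", (name,)).fetchone()
    return row, time.time() - start

# step 1: some test data - works?
small = build_test_db(100)
assert timed_lookup(small, "user50")[0] is not None

# step 2: a massive amount of test data - still works? still fast?
big = build_test_db(100_000)  # scale this toward the 3-million-record case
row, secs = timed_lookup(big, "user99999")
assert row is not None
print("lookup among 100000 rows took %.4f s" % secs)
```

if the second lookup suddenly takes seconds instead of milliseconds, you have learned something (probably: add an index) long before a customer does.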
THINK ON PAPER
at least for me: when designing algorithms/programs for a certain problem/subproblem, i always use a blank sheet of paper and make a plan with a pencil.
strangely enough, i can think much better/more logically this way and better analyse what the solution might be.
PLANS ARE THERE TO BE CHANGED
to have a plan / specification / requirements specification – book, sheet, whatever – is a good idea… but prepare to change it several times.
the waterfall model assumes that 1. idea, 2. planning, 3. implementation, 4. testing, 5. delivery to customers
can be seen as separate phases of the whole software engineering process.
they can not be separated.
because man is not perfect=god, man makes mistakes.
so everything that man produces has errors. (some so minor you won’t ever notice)
but when products become complex, they contain a lot of errors, which can make the whole product unusable = unstable = unreliable.
so during the planning phase → you might want to adapt/change your idea; during the implementation phase → you might want to adapt/change your plans; during the test phase (WORK WITH TEST-DOCUMENTATION!) → you certainly will find errors in your implementation; during usage by a customer → the customer surely will take your product to places you have never thought of, creating completely new use cases/problems that are not in your test documentation.
add them to your test documentation!
this is the way to high quality software. but it is a hard one.
it is crucial to have a document where all possible interactions/cases/use cases/problems that ever occurred with the software are written down.
when you make changes to the code, make sure to test all these possible use cases.
because one fix might break something else.
there are automated tests… these work fine for database functions, but not for gui user interactions.
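for the database side, such an automated use-case test could look like this (a hedged sketch: `add_customer` and its rules are a made-up example, and sqlite3 stands in for whatever database you actually use – each test case corresponds to one entry in the test documentation):

```python
import sqlite3
import unittest

def add_customer(db, name):
    # the database function under test; names must be unique and non-empty
    if not name:
        raise ValueError("customer name must not be empty")
    cur = db.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    return cur.lastrowid

class AddCustomerTest(unittest.TestCase):
    def setUp(self):
        # a fresh in-memory database for every single test case
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT UNIQUE)"
        )

    def test_insert_returns_new_id(self):
        self.assertEqual(add_customer(self.db, "alice"), 1)

    def test_empty_name_rejected(self):
        with self.assertRaises(ValueError):
            add_customer(self.db, "")

    def test_duplicate_name_rejected(self):
        add_customer(self.db, "bob")
        with self.assertRaises(sqlite3.IntegrityError):
            add_customer(self.db, "bob")

# run with: python -m unittest this_module
```

whenever a customer finds a new way to break `add_customer`, that case becomes one more `test_…` method – so the “one fix might break something else” problem gets caught automatically.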
so there is still room for improvement, and i hope mankind manages to fill this room… otherwise i think we are stuck in terms of how complex and, at the same time, stable a piece of software can be.
it is said that it takes 10-15 years for a software product to mature.
this is a very long time.
software development is a long-term investment of money and the much more valuable human-lifetime resource and nerves.
i love open source.
because it is a gift to mankind.
every software problem solved the open-source way, in a language that is cross-platform for the next 100 years, can be considered “solved forever”.
it is no use doing the same thing over and over again.
do it once but do it right.
linus torvalds shares this view http://www.youtube.com/watch?v=4XpnKHJAok8 “open source is the way to do software right”.
linus torvalds also encourages making a lot of small programs and linking them together (increasing possible reuse of each component) instead of making one big monolithic program that tries to “do it all” but fails often and miserably (windows).
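the same idea in miniature (a toy sketch, not from torvalds: three single-purpose functions chained together like a shell pipeline, each one small enough to test and reuse on its own):

```python
def read_lines(text):
    # one job: split raw input into lines (like cat)
    return text.splitlines()

def grep(lines, needle):
    # one job: keep only matching lines (like grep)
    return [line for line in lines if needle in line]

def count(lines):
    # one job: count lines (like wc -l)
    return len(lines)

log = "ok: started\nerror: disk full\nok: retry\nerror: disk full\n"
# chained like: cat log | grep error | wc -l
assert count(grep(read_lines(log), "error")) == 2
```

each piece can be swapped out or reused elsewhere; the monolith equivalent would be one function that reads, filters and counts in a single tangle.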
9 out of 10 software projects do not “survive” in terms of money and market.
so the odds are pretty good that the nice program you spent so much valuable lifetime on “disappears” from the screen and is lost for mankind.
this is sad sad sad and a waste of resources.
if you can not make your software open source….
… think about what OpenSource components your software would need to work.
and implement a lot of OpenSource-SubProjects/Components for mankind to reuse in making this world a better/safer/nicer place.
instead of chucking it all in the bin, you did something good, even if you fail money-wise. (really no shame: me and my colleague (a php programmer of 10 years) failed on a €300,000 online-accounting project in flash. (we did not know the whole budget in advance) … we learned a lot from that, but our customer was screwed.)
Kennedy once said: “an error does not become a mistake until you refuse to correct it”
====== WATERFALL MODEL ======
… is the idea that you could really see planning, implementation and testing as separate phases and complete them separately. (you are perfect, you are god)
Unfortunately, planning errors become visible during implementation, implementation errors become visible during testing, etc. etc. – i.e. an iterative approach which does not strictly separate these phases is needed.
With this iterative approach, nothing is “in-time” or “in-budget” yet.
But at least there is less blaming and destructiveness inside the team (“you could have, you should have, if only it had not gone like that”).
the first 80 pages are just a general ramp-up on what did go wrong and what can go wrong… (anything that can go wrong).
i hope it will guide me on a clear path of how to avoid errors during software planning and later in implementation.
how to build a team
====== SECRET SERVICE METHODOLOGY OF SOFTWARE DEVELOPMENT ======
maybe we can learn from british secret services how to build proper software:
What was it, that made the iphone so successful?
It was the feeling that you had a dead-simple, reliable, fast, stylish and high-quality software-hardware combination, and the thrill of imagining what you could do with this sort of technology that extends your abilities.
Yeah, and of course a little innovation… like two-finger zoom. That was the innovation-by-crazy-Steve-Jobs coolness factor. (the ’949 multi-touch patent, sometimes known as the Steve Jobs patent)
Innovation is good and cool… but when it comes to everyday use, and whether you would recommend this product to a friend, then you definitely want reliable, simple, fast software: high-quality, tested software.
Not some buggy beta version of something that gets you frustrated and breaks your wifi with a software update. (iPhone 4S)
So in the long term a device is only successful if it has high quality software.
Because if software is dead-simple and high-quality… people adopt it/accept it.
If not. People will complain about your device and look for alternatives.
Btw. Apple has not only become less innovative but also lazier about its software quality… they steer the microsoft way… making everyone run off to android.
Have a look at this little chart:
[Source: Iris Associates]
This is a chart showing the number of installed seats of the Lotus Notes workgroup software, from the time it was introduced in 1989 through 2000. In fact when Notes 1.0 finally shipped it had been under development for five years. Notice just how dang long it took before Notes was really good enough that people started buying it. Indeed, from the first line of code written in 1984 until the hockey-stick part of the curve where things really started to turn up, about 11 years passed. During this time Ray Ozzie and his crew weren’t drinking piña coladas in St Barts. They were writing code.
The reason I’m telling you this story is that it’s not unusual for a serious software application. The Oracle RDBMS has been around for 22 years now. Windows NT development started 12 years ago. Microsoft Word is positively long in the tooth; I remember seeing Word 1.0 for DOS in high school (that dates me, doesn’t it? It was 1983.)
To experienced software people, none of this is very surprising. You write the first version of your product, a few people use it, they might like it, but there are too many obvious missing features, performance problems, whatever, so a year later, you’ve got version 2.0. Everybody argues about which features are going to go into 2.0, 3.0, 4.0, because there are so many important things to do. I remember from the Excel days how many things we had that we just had to do. Pivot Tables. 3-D spreadsheets. VBA. Data access. When you finally shipped a new version to the waiting public, people fell all over themselves to buy it. Remember Windows 3.1? And it positively, absolutely needed long file names, it needed memory protection, it needed plug and play, it needed a zillion important things that we can’t imagine living without, but there was no time, so those features had to wait for Windows 95.
But that’s just the first ten years. After that, nobody can think of a single feature that they really need. Is there anything you need that Excel 2000 or Windows 2000 doesn’t already do? With all due respect to my friends on the Office team, I can’t help but feel that there hasn’t been a useful new feature in Office since about 1995. Many of the so-called “features” added since then, like the reviled ex-paperclip and auto-document-mangling, are just annoyances and O’Reilly is doing a nice business selling books telling you how to turn them off.
So, it takes a long time to write a good program, but when it’s done, it’s done. Oh sure, you can crank out a new version every year or two, trying to get the upgrade revenues, but eventually people will ask: “why fix what ain’t broken?”
Failure to understand the ten-year rule leads to crucial business mistakes.
Mistake number 1. The Get Big Fast syndrome. This fallacy of the Internet bubble has already been thoroughly discredited elsewhere, so I won’t flog it too much. But an important observation is that the bubble companies that were trying to create software (as opposed to pet food shops) just didn’t have enough time for their software to get good. My favorite example is desktop.com, which had the beginnings of something that would have been great if they had worked on it for 10 years. But the build-to-flip mentality, the huge overstaffing and overspending of the company, and the need to raise VC every ten minutes made it impossible to develop the software over 10 years. And the 1.0 version, like everything, was really morbidly awful, and nobody could imagine using it. But desktop.com 8.0 might have been seriously cool. We’ll never know.
Mistake number 2. the Overhype syndrome. When you release 1.0, you might want to actually keep it kind of quiet. Let the early adopters find it. If you market it and promote it too heavily, when people see what you’ve actually done, they will be underwhelmed. Desktop.com is an example of this, so is Marimba, and Groove: they had so much hype on day one that people stopped in and actually looked at their 1.0 release, trying to see what all the excitement was about, but like most 1.0 products, it was about as exciting as watching grass dry. So now there are a million people running around who haven’t looked at Marimba since 1996, and who think it’s still a dorky list box that downloads Java applets that was thrown together in about 4 months.
Keeping 1.0 quiet means you have to be able to break even with fewer sales. And that means you need lower costs, which means fewer employees, which, in the early days of software development, is actually a really great idea, because if you can only afford 1 programmer at the beginning, the architecture is likely to be reasonably consistent and intelligent, instead of a big mishmash with dozens of conflicting ideas from hundreds of programmers that needs to be rewritten from scratch (like Netscape, according to the defenders of the decision to throw away all the source code and start over).
Mistake number 3. Believing in Internet Time. Around 1996, the New York Times first noticed that new Netscape web browser releases were coming out every six months or so, much faster than the usual 2 year upgrade cycle people were used to from companies like Microsoft. This led to the myth that there was something called “Internet time” in which “business moved faster.” Which would be nice, but it wasn’t true. Software was not getting created any faster, it was just getting released more often. And in the early stages of a new software product, there are so many important things to add that you can do releases every six months and still add a bunch of great features that people Gotta Have. So you do it. But you’re not writing software any faster than you did before. (I will give the Internet Explorer team credit. With IE versions 3.0 and 4.0 they probably created software about ten times faster than the industry norm. This had nothing to do with the Internet and everything to do with the fact that they had a fantastic, war-hardened team that benefited from 15 years of collective experience creating commercial software at Microsoft.)
Mistake number 4. Running out of upgrade revenues when your software is done. A bit of industry lore: in the early days (late 1980s), the PC industry was growing so fast that almost all software was sold to first time users. Microsoft generally charged about $30 for an upgrade to their $500 software packages until somebody noticed that the growth from new users was running out, and too many copies were being bought as upgrades to justify the low price. Which got us to where we are today, with upgrades generally costing 50%-60% of the price of the full version and making up the majority of the sales. Now the trouble comes when you can’t think of any new features, so you put in the paperclip, and then you take out the paperclip, and you try to charge people both times, and they aren’t falling for it. That’s when you start to wish that you had charged people for one year licenses, so you can make your product a subscription and have permission to keep taking their money even when you haven’t added any new features. It’s a neat accounting trick: if you sell a software package for $100, Wall Street will value that at $100. But if you can sell a one year license for $30, then you can claim that you’re going to get recurring revenue of $30 for the next, say, 10 years, which is worth $200 to Wall Street. Tada! Stock price doubles! (Incidentally, that’s how SAS charges for their software. They get something like 97% renewals every year.)
The trouble is that with packaged software like Microsoft’s, customers won’t fall for it. Microsoft has been trying to get their customers to accept subscription-based software since the early 90′s, and they get massive pushback from their customers every single time. Once people got used to the idea that you “own” the software that you bought, and you don’t have to upgrade if you don’t want the new features, that can be a big problem for the software company which is trying to sell a product that is already feature complete.
Mistake number 5. The “We’ll Ship It When It’s Ready” syndrome. Which reminds me. What the hell is going on with Mozilla? I made fun of them more than a year ago because three years had passed and the damn thing was still not out the door. There’s a frequently-obsolete chart on their web site which purports to show that they now think they will ship in Q4 2001. Since they don’t actually have anything like a schedule based on estimates, I’m not sure why they think this. Ah, such is the state of software development in Internet Time Land.
But I’m getting off topic. Yes, software takes 10 years to write, and no, there is no possible way a business can survive if you don’t ship anything for 10 years. By the time you discount that revenue stream from 10 years in the future to today, you get bupkis, especially since business analysts like to pretend that everything past 5 years is just “residual value” when they make their fabricated, fictitious spreadsheets that convince them that investing in sock puppets at a $100,000,000 valuation is a pretty good idea.
Anyway, getting good software over the course of 10 years assumes that for at least 8 of those years, you’re getting good feedback from your customers, and good innovations from your competitors that you can copy, and good ideas from all the people that come to work for you because they believe that your version 1.0 is promising. You have to release early, incomplete versions — but don’t overhype them or advertise them on the Super Bowl, because they’re just not that good, no matter how smart you are.
Mistake number 6. Too-frequent upgrades (a.k.a. the CorelSyndrome). At the beginning, when you’re adding new features and you don’t have a lot of existing customers, you’ll be able to release a new version every 6 months or so, and people will love you for the new features. After four or five releases like that, you have to slow down, or your existing customers will stop upgrading. They’ll skip releases because they don’t want the pain or expense of upgrading. Once they skip a release, they’ll start to convince themselves that, hey, they don’t always need the latest and greatest. I used Corel PhotoPaint 6.0 for 5 years. Yes, I know, it had all kinds of off-by-one bugs, but I knew all the off-by-one bugs and compensated by always dragging the selection one pixel to the right of where I thought it should be.
Make a ten year plan. Make sure you can survive for 10 years, because the software products that bring in a billion dollars a year all took that long. Don’t get too hung up on your version 1 and don’t think, for a minute, that you have any hope of reaching large markets with your first version. Good software, like wine, takes time.
This is said to be the industry’s default bible of software engineering… well… it’s a lot of paper.
Software Engineering – Die SE-Bibel für Lehre und Praxis
The German translation is unfortunately full of errors. Better to take the English edition?
Ian Sommerville (author)
about the author:
http://iansommerville.com/techstuff/ “It’s going to be hard to build systems for digital government”
From the content: https://en.wikipedia.org/wiki/Formal_specification
In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information.
With each passing decade computer systems have become increasingly more powerful and, as a result, more impactful to society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Established engineering disciplines use mathematical analysis as the foundation of creating and validating product design. Formal specifications are one such way to achieve this in software engineering, although they have not become as widespread as once predicted; other methods such as testing are more commonly used to enhance code quality.
Testing finds errors (or bugs) in the implementation. It is best to find these as early as possible because the farther along in a project a bug is found, the more costly it is to fix. The idea with formal specifications is to minimize the creation of such errors. This is done by reducing the ambiguity of informal system requirements. By creating a formal specification, the designers are forced to make a detailed system analysis early on in the project. This analysis will usually reveal errors or inconsistencies that exist in the informal system requirements. As a result the chance of subtle errors being introduced and going undetected in complex software systems is reduced. Finding and correcting these kinds of errors early in the design stage will help to prevent expensive fixes that may arise in the future.
Testing and QA contribute to more than 50% of the total development cost of some projects; through the use of formal specifications certain testing processes may be automated leading to better and more cost-effective testing.
Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction.
It is important to note that a formal specification is not an implementation, but rather it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it.
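the what-versus-how distinction can be made concrete with a toy example (a sketch of my own, not from the article: the specification of sorting says what the output must look like, while `bubble_sort` is just one of many possible hows that can be checked against it):

```python
from collections import Counter

def satisfies_sort_spec(inp, out):
    # the SPECIFICATION: the output must contain exactly the same
    # elements as the input (a permutation) and be in ascending order.
    # It says nothing about HOW the sorting is achieved.
    same_elements = Counter(inp) == Counter(out)
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return same_elements and ordered

def bubble_sort(items):
    # ONE possible implementation; quicksort or mergesort would
    # satisfy the very same specification
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [3, 1, 2, 1]
assert satisfies_sort_spec(data, bubble_sort(data))
assert not satisfies_sort_spec(data, [1, 1, 2])  # lost an element: spec violated
```

a buggy implementation that drops duplicates would pass the “ordered” check but fail the “same elements” check – which is exactly the kind of subtle error a precise specification is meant to catch.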
A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfiable, minimal
A good specification will have:
- Constructability, manageability and evolvability
- Powerful and efficient analysis
One of the main reasons there is interest in formal specifications is that they will provide an ability to perform proofs on software implementations. These proofs may be used to validate a specification, verify correctness of design, or to prove that a program satisfies a specification.
A design (or implementation) cannot ever be declared “correct” on its own. It can only ever be “correct with respect to a given specification”. Whether the formal specification correctly describes the problem to be solved is a separate issue. It is also a difficult issue to address, since it ultimately concerns the problem of constructing abstracted formal representations of an informal concrete problem domain, and such an abstraction step is not amenable to formal proof. However, it is possible to validate a specification by proving “challenge” theorems concerning properties that the specification is expected to exhibit. If correct, these theorems reinforce the specifier’s understanding of the specification and its relationship with the underlying problem domain. If not, the specification probably needs to be changed to better reflect the domain understanding of those involved with producing (and implementing) the specification.
Formal methods of software development are not widely used in industry. Most companies do not consider it cost-effective to apply them in their software development processes. This may be for a variety of reasons, some of which are:
- High initial start up cost with low measurable returns
- Limited scope 
- Not cost-effective
- This is not entirely true: by limiting their use to only core parts of critical systems, formal methods have been shown to be cost-effective
- Low-level ontologies
- Poor guidance
- Poor separation of concerns
- Poor tool feedback
Formal specification techniques have existed in various domains and on various scales for quite some time. Implementations of formal specifications will differ depending on what kind of system they are attempting to model, how they are applied and at what point in the software life cycle they have been introduced. These types of models can be categorized into the following specification paradigms:
- History-based specification 
- behavior based system histories
- assertions are interpreted over time
- State-based Specification 
- behavior based on system states
- series of sequential steps, (e.g. a financial transaction)
- languages such as Z, VDM or B rely on this paradigm 
- Transition-based specification 
- behavior based on transitions from state-to-state of the system
- best used with a reactive system
- languages such as Statecharts, PROMELA, STeP-SPL, RSML or SCR rely on this paradigm 
- Functional specification 
- specify a system as a structure of mathematical functions
- OBJ, ASL, PLUSS, LARCH, HOL or PVS rely on this paradigm 
- Operational Specification 
- early languages such as Paisley, GIST, Petri nets or process algebras rely on this paradigm 
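to make one of these paradigms tangible, here is a toy transition-based specification (my own illustrative sketch, not from any of the languages named above: the behavior of a door controller is defined purely by which state-to-state moves are legal):

```python
# a transition-based specification of a toy door controller:
# behavior is defined entirely by the set of legal
# (state, event) -> next_state transitions
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    # reject any move the specification does not allow
    if (state, event) not in TRANSITIONS:
        raise ValueError("illegal transition: %s in state %s" % (event, state))
    return TRANSITIONS[(state, event)]

# a legal history: open, close, lock, unlock brings us back to "closed"
s = "closed"
for e in ["open", "close", "lock", "unlock"]:
    s = step(s, e)
assert s == "closed"
```

note that trying to `step("locked", "open")` raises an error immediately – the specification itself tells you the use case is illegal, instead of a bug surfacing later in the implementation.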
In addition to the above paradigms there are ways to apply certain heuristics to help improve the creation of these specifications. The paper referenced here best discusses heuristics to use when designing a specification. They do so by applying a divide-and-conquer approach.
The Z notation is an example of a leading formal specification language. Others include the Specification Language (VDM-SL) of the Vienna Development Method and the Abstract Machine Notation (AMN) of the B-Method. In the Web services area, formal specification is often used to describe non-functional properties (Web services Quality of Service).
- Algebraic specification
- Formal methods
- Specification (technical standard)
- Software engineering
- Specification language
- Hierons, R. M.; Krause, P.; Lüttgen, G.; Simons, A. J. H.; Vilkomir, S.; Woodward, M. R.; Zedan, H.; Bogdanov, K.; Bowen, J. P.; Cleaveland, R.; Derrick, J.; Dick, J.; Gheorghe, M.; Harman, M.; Kapoor, K. (2009). “Using formal specifications to support testing”. ACM Computing Surveys 41 (2): 1. doi:10.1145/1459352.1459354.
- Gaudel, M.-C. (1994). “Formal specification techniques”. Proceedings of the 16th International Conference on Software Engineering. p. 223. doi:10.1109/ICSE.1994.296781. ISBN 0-8186-5855-X.
- Lamsweerde, A. van (2000). “Formal specification”. Proceedings of the Conference on the Future of Software Engineering – ICSE ’00. p. 147. doi:10.1145/336512.336546. ISBN 1581132530.
- Sommerville, Ian (2009). “Formal Specification”. Software Engineering. Retrieved 3 February 2013.
- Nummenmaa, Timo; Tiensuu, Aleksi; Berki, Eleni; Mikkonen, Tommi; Kuittinen, Jussi; Kultima, Annakaisa (4 August 2011). “Supporting agile development by facilitating natural user interaction with executable formal specifications”. ACM SIGSOFT Software Engineering Notes 36 (4): 1–10. doi:10.1145/1988997.2003643.
- van der Poll, John A.; Kotze, Paula (2002). “What design heuristics may enhance the utility of a formal specification?”. Proceedings of SAICSIT ’02: 179–194.
- ^ S-Cube Knowledge Model: Formal Specification
- A Case for Formal Specification (Technology) by Coryoth 2005-07-30
- Formal Specification
because of your bravery in reading all of this …
other users’ philosophies:
The GMPG was founded on the following principles:
Implementations of protocols should be encouraged to interoperate.
Thus GMPG has chosen to use the (cc) nd license restriction for its protocols and formats to reduce mutability into non-interoperable forms.
Human user centrism
- Humans (especially users) first, machines second.
- Technologies must be first designed for ease of use (including authoring) and human understanding, and only second for ease of development and machine understanding.
- Community contribution
- Contributing to the community has wide ranging positive effects.
The Creative Commons is devoted to expanding the range of creative work available for others to build upon and share. Thus GMPG has chosen to share all finished protocols and formats with the (cc) Attribution-NoDerivs 1.0 (by-nd) license. All finished examples and samples are shared with the more liberal (cc) Attribution 1.0 (by) license.
Similar to the W3C’s principle of “consensus”, the GMPG’s designs/decisions are made by unanimity among the Founders.
The GMPG believes there are many opportunities to launch new efforts, and thus encourages others to do so as well.
Feel free to use this set of principles as a starting point, it is licensed under the Creative Commons (by) License which of course allows for derivative works.
Profit and prosper
Enable people to build and sell products, without obligating them to divulge their intellectual property.
Thus GMPG does not include the (cc) nc license restriction on any of its efforts, nor does it contain any so-called “viral” provisions. May you profit and prosper.
Inspirations and sources
Here are a few of the inspirations and sources for many of these principles.
Note that many of these sources contain many other principles, some of which were perhaps not important enough, and some of which could even be considered counter-principles.
It is left as an exercise to the reader to determine which are which.
In no particular order:
- Creative Commons
- Tim Berners-Lee:
- Bert Bos: An essay on W3C’s design principles
- F. Heylighen: Occam’s Razor
This web page is licensed under a Creative Commons License.
GMPG (Global Multimedia Protocols Group)
The links in Joe’s blogroll would look something like this:
<a href="http://dave-blog.example.org/" rel="friend met">Dave</a>
<a href="http://darryl-blog.example.org/" rel="friend met">Darryl</a>
<a href="http://james-blog.example.com/" rel="met">James Expert</a>