In Web 1.0, search engines retrieve macro content. Search is very fast, but the results are often inaccurate or more than users can chew.
In Web 2.0, search engines retrieve tags attached to micro content (Furl even attaches tags to macro content). The process of tagging is manual, tedious, and covers a negligible fraction of the WWW. Web 2.0 tags everything: pictures, links, events, news, blogs, audio, video, and so on. Google Base even retrieves micro content texts.
In Web 3.0, search engines will hopefully retrieve micro content that was tagged automatically. This implies translating billions of Web 1.0 macro content pages into micro content. The result could be more precise search, because tagging can resolve part of the ambiguity that homonyms and synonyms introduce into the search process.
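To make the ambiguity point concrete, here is a minimal Python sketch, using a made-up three-document corpus rather than any real engine's API, of how automatic tags could separate the senses of a homonym like "jaguar" that plain keyword search mixes together:

```python
# Hypothetical micro-content corpus; each item carries automatically
# assigned tags alongside its text. Purely illustrative data.
docs = [
    {"text": "The jaguar stalks prey in the rainforest", "tags": {"animal", "wildlife"}},
    {"text": "The new Jaguar sedan tops 150 mph", "tags": {"car", "vehicle"}},
    {"text": "Jaguar conservation areas in Brazil", "tags": {"animal", "geography"}},
]

def keyword_search(query):
    """Web 1.0 style: match the raw text only -- all senses come back mixed."""
    return [d for d in docs if query.lower() in d["text"].lower()]

def tagged_search(query, tag):
    """Web 3.0 style: keyword match narrowed by a disambiguating tag."""
    return [d for d in keyword_search(query) if tag in d["tags"]]

print(len(keyword_search("jaguar")))        # 3 -- animal and car senses mixed
print(len(tagged_search("jaguar", "car")))  # 1 -- only the automobile sense
```

The keyword search alone cannot tell the animal from the automobile; the tag acts as the extra signal that resolves the homonym, which is the gain automatic tagging promises.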
I've added some interesting excerpts I stumbled upon while preparing this post:
What will Web 10.0 be...?
Your very experiences, your senses, perhaps even your thoughts, will be broadcast and archived for anyone to download and view.
Web 2.0 is the Community Web (for people: apps/sites connecting them).
Web 3.0 is the Semantic Web (for machines).
Web 2.0 and Web 3.0 are a fork we are moving into now, where one is focused on internet architectures for people/community/usability and the other is focused on internet architectures for machines.
Web 4.0 is when these technologies come together to form what I call the "Learning Web". This is moving more into the area of Artificial Intelligence.
The Learning Web is where the Web is actually learning by itself and is a user alongside human users, generating new ideas, information, and products without direct human input. This may be possible on a large scale when more sensors/actuators/semantic structure/ontologies are advanced and in place someday (maybe 10-15 years).