Monday, July 7, 2008

Yahoo! Still Trying To Catch Reggie Bush


If it's a Thursday, that must mean it's time for another Yahoo! Sports investigation into gifts Reggie Bush might have received at USC. (We don't mean to make light of the hard work Yahoo! has put into its exhaustive investigation, but we repeat that it's hard to take an investigation all that seriously when the journalistic institution in question requires an exclamation point at the end of its name.)


Detect Invisible Users On Yahoo Messenger

This is a simple, tested method of detecting invisible users on Yahoo Messenger.

Just follow these simple instructions.

1--> Go to this website: xebeer - Detect Invisible Users on Yahoo Messenger.
2--> Enter the user's Yahoo ID in the text box and click the search symbol.
3--> The result appears within a minute, along with the person's avatar, as shown in the figure.

Try this out. It's simple and it really works, and there is no need to install anything.

You can also detect an invisible user via Vngrabber or

UPDATE: The Xebeer server is down most of the time, so here are some new ways to detect invisible users on Yahoo.

1. --> is a professional online service which can help you to see who is online, offline or invisible in Yahoo.

2. Yahoo Status Checker --> The results are very accurate.

3. Buddy Spy --> Buddy Spy is software that shows you whether your friend is truly online or just invisible.


History Of Operating Systems From Microsoft

Posted By Ankit Agrawal On Thursday, November 15, 2007

Above is a video with a lot of screenshots from past versions of Windows. It also gives you the history of Windows startup sounds once you get to Windows 3.1 and beyond.

Here is a Windows timeline of releases that span more than 20 years:

–Release Dates–

  • Windows 1.0 - November 20, 1985
  • Windows 2.0 - December 9, 1987
  • Windows 3.0 - May 22, 1990
  • Windows 3.1 - April 6, 1992
  • Windows NT 3.1 - July 27, 1993
  • Windows 95 - August 24, 1995
  • Windows NT 4.0 - August 24, 1996
  • Windows 98 - June 25, 1998
  • Windows 98 SE - May 9, 1999
  • Windows 2000 - February 17, 2000
  • Windows Me - September 14, 2000
  • Windows XP - October 25, 2001
  • Windows Server 2003 - April 25, 2003
  • Windows XP Media Center Edition 2003 - December 18, 2003
  • Windows XP Media Center Edition 2005 - October 12, 2004
  • Windows XP Professional x64 Edition - April 25, 2005
  • Windows Vista (corporate launch) - November 30, 2006
  • Windows Vista (consumer launch) - January 30, 2007
Microsoft already has a slightly outdated history page.


Convert Images Into Vectors

Posted By Ankit Agrawal On Thursday, November 15, 2007
Firefox Vector
Original image on the left, vectorized on the right

Have you ever had a photo that you wished you could enlarge? Doing that cleanly requires a vector image, a graphic format that can be scaled to any size without losing quality.

You’re in luck, because a new service called VectorMagic will take your regular images and do its best to convert them into vector graphics. The conversion process is normally rather quick, and it only asks you a few questions.

After it’s done you’ll be able to download the result in a variety of formats, including the SVG and EPS vector formats as well as PNG.

Related Link: VectorMagic


Online Text To Speech Convertor

Posted By Ankit Agrawal On Thursday, November 15, 2007
VozMe is an online text-to-speech converter that lets you instantly convert any English (or Spanish) text to audio.

It also lets you save the converted file in MP3 format.

Related Link: Vozme


Virus Total Web Based Virus And Spyware Scanner

Posted By Ankit Agrawal On Thursday, November 15, 2007
VirusTotal is a web-based virus and spyware scanner that lets you check a file for viruses against 30 major antivirus databases.

You can either upload the file online or send it by email.

Read more: VirusTotal - Online Virus and Spyware Scanner.


TechCruser - The Tech Navigator | Version 0.2 Beta | Created and Maintained By Ankit Agrawal
Best Viewed in Firefox 1.5+ at 1024x768 Or Higher Resolution
©2007 Copyrighted Under Creative Commons License


iProspect's Search Engine Marketing Technology


Search engine marketing technology is constantly evolving. The search engines utilize highly complex algorithms that are frequently modified and improved upon to rate and rank websites. Paid search programs offered by the engines are also continuously updated with new features and functionality. And new automated bidding and feed-based technologies are being developed, both by the engines and search marketing vendors, to facilitate your participation in paid search programs. How can you keep up with the pace of all this change? Which search engine marketing technology and what programs do you need to help you meet your marketing goals?

Click below to hear Brian Kaminski, Managing Director, iProspect San Francisco, talk about our technology.

iProspect is the Original® Search Engine Marketing Firm not only because we were the first search engine marketing-only firm, but because of our cutting-edge tools and proven campaign tracking capabilities that help our clients achieve the best results possible. Our smart and efficient search engine marketing technology, suited both for natural and paid search as well as campaign analysis and reporting, is crucial to our ability to manage your campaign, and to quickly adapt to changes made by the search engines. It also allows us to quickly automate the management of new programs offered by the search engines once they are launched. By taking advantage of this data-intensive and robust search engine marketing technology you enjoy superior return on your online marketing investment.

Your campaign benefits from our:
  • Robust technology staff, responsible for building, maintaining and enhancing the tools used by our client services staff to maximize the results of your campaign.

  • Patent-pending iSEBA® (iProspect Search Engine Bidding Agent) automated paid search advertising management tool, using predictive modeling and regression analysis, to optimize the spending of your pay per click advertising, paid inclusion and shopping feed budget(s) toward the campaign objective that you define.

  • Status as a Yahoo! Search Submit Pro Certified Ambassador - developing and managing the automated feeds that ensure your Web pages will appear within Yahoo!'s search results.

  • Ability to develop, manage and globally optimize automated shopping feeds into a multitude of shopping search engines, such as Froogle, Yahoo! Shopping, etc.

  • iSERAS™ (iProspect Search Engine Ranking Analysis System) natural search engine optimization tool that identifies the most important website characteristics used by the major search engines when ranking search results. These findings are used by our staff to inform the natural SEO recommendations they make to you.

  • Extensive and flexible reporting (and data exporting) capabilities, for both paid and natural search engine marketing campaigns, available to you 24/7 over our client-only extranet, enabling you to understand your campaign performance and identify opportunities to improve and optimize your results.

  • Other proprietary search engine marketing technology that enables our employees to maximize the results of your campaign, including tools to enhance your universe of targeted keywords, better target link-building and directory opportunities, and perform comparisons between your site and those of your competitors.

Where did Search Engines Begin?


The first search engine, Archie, was created in 1990 by Alan Emtage, a student at McGill University in Montreal. The original intent of the name was "archives," but Unix standards required a shorter filename.

Tim Berners-Lee was around at this point, but there was no World Wide Web yet. The main way people shared data back then was via the File Transfer Protocol (FTP). If you had a file you wanted to share, you would set up an FTP server; anyone interested in the data could retrieve it with an FTP client.

This process worked effectively in small groups, but the data became as fragmented as it was plentiful.

The First Search Engines - Archie, Veronica and Jughead

Archie helped solve this fragmentation problem by building a database of filenames that it matched against users' queries.

Archie soon became so popular that in 1993 the University of Nevada System Computing Services group developed Veronica. Veronica served the same purpose as Archie, but it worked on plain-text files.

Soon another user interface named Jughead appeared with the same purpose as Veronica; both were used for files sent via Gopher, which was created as an Archie alternative by Mark McCahill at the University of Minnesota in 1991.

The Web Robot or Bot used by Search Engines

Soon the web's first robot came along. Matthew Gray introduced the World Wide Web Wanderer. He initially wanted to measure the growth of the web and created this bot to count active web servers. He soon upgraded the bot to capture actual URLs, and his database became known as the Wandex. The Wanderer was as much of a problem as it was a solution, because it caused system lag by accessing the same page hundreds of times a day. It did not take Gray long to fix the software, but people began to wonder whether robots were a good or a bad thing.

Computer robots are simply programs that automate repetitive tasks at speeds impossible for humans to reproduce. The term "bot" on the internet is usually used to describe anything that interfaces with the user or that collects data. Search engines use "spiders," which crawl (or "spider") the web for information. Another bot example is the chatterbot, which attempts to act like a human and converse with people on a specific topic.

In October 1993 Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in response to the Wanderer. ALIWEB allowed users to submit the pages they wanted indexed, along with their own page descriptions. This meant it needed no bot to collect data and did not use up excessive bandwidth. The downside of ALIWEB was that many people did not know how to submit their site.

The Search Engine Spider

By December 1993, three full fledged bot fed search engines had surfaced on the web.

JumpStation, the World Wide Web Worm, and the Repository-Based Software Engineering (RBSE) spider. JumpStation gathered information about the titles and headers of Web pages and retrieved them using a simple linear search. As the web grew, JumpStation slowed to a stop. The WWW Worm indexed titles and URLs. The problem with JumpStation and the World Wide Web Worm was that they listed results in the order they found them, with no ranking. The RBSE spider did implement a ranking system.

Excite Search Engine

Excite came from the project Architext, which was started in February 1993 by six Stanford undergraduate students. They had the idea of using statistical analysis of word relationships to make searching more efficient. They were soon funded, and in mid-1993 they released copies of their search software for use on web sites.
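The "statistical analysis of word relationships" idea can be sketched with a toy example. This is purely illustrative (the documents and helper names are made up, and this is not Excite's actual algorithm): words that frequently co-occur in the same documents are statistically related, so a query for one word can also surface documents containing its neighbours.

```python
# Toy sketch of statistical word relationships via co-occurrence counts.
from collections import defaultdict
from itertools import combinations

docs = [
    "surf report for california beaches",
    "california surf conditions and waves",
    "stock market report",
]

# Count how often each pair of words appears in the same document.
cooccur = defaultdict(int)
for doc in docs:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def related(word):
    """Words seen alongside `word`, most frequently co-occurring first."""
    scores = defaultdict(int)
    for (a, b), n in cooccur.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(related("surf")[0])  # "california" co-occurs with "surf" in two docs
```

A real engine would weight counts by document frequency rather than use raw pair counts, but the principle is the same.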

All these links seemed somewhat irrelevant because the spiders of the day were not intelligent enough to understand what the links meant, so if you did not know the exact name of what you were looking for, it was extremely hard, and sometimes impossible, to find. Out of necessity the EINet Galaxy web directory was born in January 1994. It was organized similarly to how web directories are today. The biggest reason EINet Galaxy became a success was that it also contained Gopher and Telnet search features in addition to its web search feature. In truth, the size of the web in early 1994 did not really require a web directory; however, others soon followed.

The birth of the Yahoo Search Engine

In April 1994 David Filo and Jerry Yang created Yahoo as a collection of their favourite web pages. As their number of links grew they had to reorganize and become a searchable directory. What set the directories above The Wanderer is that they provided a description with each URL.

Brian Pinkerton of the University of Washington released WebCrawler on April 20, 1994. It was the first crawler that indexed entire pages. It soon became so popular that during daytime hours it could not be used. AOL eventually purchased WebCrawler and ran it on their network. Then in 1997, Excite bought out WebCrawler, and AOL began using Excite to power its NetFind. WebCrawler opened the door for many other services to follow suit. Within a year of its debut came Lycos, Infoseek, and OpenText.

Lycos Search Engine

Lycos was the next major development, designed at Carnegie Mellon University around July 1994. Michael Mauldin was responsible for this search engine and remained chief scientist at Lycos Inc.

On July 20, 1994, Lycos went public with a catalogue of 54,000 documents. In addition to providing ranked relevance retrieval, Lycos provided prefix matching and word-proximity bonuses. But Lycos' main difference was the sheer size of its catalogue: by August 1994, Lycos had identified 394,000 documents; by January 1995, the catalogue had reached 1.5 million documents; and by November 1996, Lycos had indexed over 60 million documents -- more than any other Web search engine. In October 1994, Lycos ranked first on Netscape's list of search engines by finding the most hits on the word "surf."

A Search Engine called Infoseek

Infoseek also started out in 1994, claiming to have been founded in January. They did not bring a whole lot of innovation to the table, but they offered a few add-ons, and in December 1995 they convinced Netscape to use them as its default search, which gave them major exposure. AltaVista's online debut came during this same month. AltaVista brought many important features to the web scene: nearly unlimited bandwidth (for that time), the first natural-language queries, advanced searching techniques, and the ability for users to add or delete their own URL within 24 hours. They even allowed inbound link checking, and also provided search tips.

The Looksmart Search Engine

The Looksmart directory came about in 1996.

The Inktomi Corporation came about on May 20, 1996 with its search engine HotBot. Two Cal Berkeley cohorts created Inktomi from the improved technology gained from their research. HotWired listed this site and it quickly became hugely popular. It has since been bought by Yahoo, though.

Ask Jeeves

In April 1997 the Ask Jeeves search engine was launched. Northern Light was also launched in 1997.

.... and then there was Google .... the world's largest Search Engine

In 1998 the last of the current search superpowers, and the most powerful to date, Google, was launched. It decided to rank pages using an important concept: the implied value of inbound links. This makes the web somewhat democratic, as each outgoing link is a vote. Google has become so popular that major portals such as AOL and Yahoo have used Google, allowing that search technology to own the lion’s share of web searches.
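The "links as votes" idea is the heart of PageRank. Here is a minimal sketch using a made-up three-page graph and the standard damping factor; it is a simplified illustration of the published algorithm, not Google's production code.

```python
# Minimal PageRank sketch: each outgoing link is a "vote"
# for the page it points to, split evenly among a page's links.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share   # pass votes along links
            else:
                # dangling page: spread its rank over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # prints "home": it receives the most votes
```

Note that "home" wins not just by inbound-link count but because "about" gives it an undivided vote, which is exactly the weighting that made this scheme harder to game than raw link counting.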

Google went public at $85 a share on August 19, 2004 and its first trade was at 11:56 am ET at $100.01.

On September 30, 2004 Vivisimo launched Clusty, which was the first major search engine to offer blog and other similar content type tabs.

On October 5, 2004 Bill Gross (founder of Overture) relaunched Snap as a search engine with a completely transparent business model (showing search volumes, revenues, and advertisers). Snap has many advanced sorting features but it may be a bit more than what most searchers were looking for.

On November 10th Google opened up their Google Advertising Professional program.

On November 18th Google launched the Google Scholar search program.

On January 21, 2005 Google opened up free cross-platform ad tracking.

Geico took Google to court for trademark violation over allowing "Geico" to be used as an advertising keyword trigger. Geico lost this US-based case on December 15, 2004. Google lost a similar French trademark case brought by Le Meridien Hotels on December 16, 2004.

On January 18, 2005 Google, MSN, and Yahoo! announced the release of the rel="nofollow" attribute, which allows blog owners to block comment spam from passing link popularity. This still does not deter the spam bots themselves, though, and it was quickly adopted by many non-blog sites. Wikipedia was the first major non-blog site to use the nofollow attribute.
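To illustrate the mechanism, here is how a hypothetical crawler might separate nofollow links (crawled but not counted for popularity) from ordinary links, using Python's standard html.parser. The class name and sample URLs are invented for the example; real engines implement this at much larger scale.

```python
# Sketch: honouring rel="nofollow" when extracting links from a page.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counted = []   # links that pass link popularity
        self.skipped = []   # nofollow links (e.g. blog-comment spam)

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        rels = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rels:
            self.skipped.append(href)
        else:
            self.counted.append(href)

html = '''
<p><a href="https://example.com/article">editorial link</a></p>
<p><a href="http://spam.example" rel="nofollow">comment spam</a></p>
'''
parser = LinkExtractor()
parser.feed(html)
print(parser.counted)  # ['https://example.com/article']
print(parser.skipped)  # ['http://spam.example']
```

Since rel can hold several space-separated tokens, the sketch splits the attribute value rather than comparing it whole.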

The NineMSN Search Engine

In 1998 MSN Search was launched. The Open Directory and Direct Hit were also launched in 1998.

Disney released the Go Network, which has lost much of its popularity since 1999. FAST released its search technology, which is thought to be the closest competitor to Google.

In 2000 the Teoma search engine was released, which uses clustering to organize sites by Subject Specific Popularity. In 2001 Ask Jeeves bought Teoma to replace the Direct Hit search engine.

In 2003 Google released a contextual ad program by the name of AdSense, which allows people like me to make revenue off the automated placement of relevant ads on my pages.

In the later months of 2003 (November 15, to be specific), Google began to heavily introduce many more semantic elements into its search product. Researchers and SEOs like me have noticed the wild changes in search relevancy, but many searchers remain clueless to the changes.

The Other Search Engines

LookSmart bought the WiseNut search engine in March 2002 and used the new search product to power the back end of their search. One of LookSmart's largest problems is that their directory-first mentality has hurt their relevancy.

In 2004 MSN dropped LookSmart in favour of Inktomi. LookSmart struggles to make ends meet because they are putting the cart before the horse: RELEVANCY WINS DISTRIBUTION.

In 2003 Overture purchased AllTheWeb and AltaVista. Yahoo then gobbled up Inktomi and Overture.

Yahoo in 2004 dumped Google in favour of its own in-house search engine. Yahoo! Slurp is believed to be collecting data to make a new database separate from the Inktomi database. The new Yahoo! database replaced both AltaVista and AllTheWeb in March 2004.

The latest in Search Engines

Microsoft is making a large investment in developing a new search technology, which should be some cause for concern for the other major search engines.

Through this course of history many smaller search engines have come and gone as the search industry has struggled to find a balance between profitability and relevancy. There are niche specific engines, meta engines, and in 1997 Overture (named GoTo back then) launched the pay per click variety.

Meta engines search multiple other engines at the same time, figuring that by drawing from multiple sources they can refine the results to a higher quality. The problem with meta searches is that they are usually overstuffed with advertisements: you are only as strong as your weakest link. InfoSpace powers most of the larger meta search engines.

The newest search engine concepts are web site clustering, semantics, and having industry specific smaller search engines / portals.

In 2004 Nutch and Dipsie were to be launched but did not yet make huge waves.

Nutch - an open-source search engine
Dipsie - a huge search-engine-to-be which claims it will index over 10,000,000,000 documents this year
Accoona - endorsed by Bill Clinton, but out of the gate their relevancy was a bit questionable

MSN Beta began to power a large portion of the MSN Search queries on January 20, 2005.

What is the future of Search Engines?

Some of the things that search engines will do in the future include:

Image Scanning Search Engines

It is obvious that scanning images is one of the major upgrades that will soon be possible with search engines. In early 2004, Princeton released its 3D search engine, which can search for images similar to what you sketch.

Streaming Media Search Engines

Singingfish is already offering streaming media searches. Interesting today, but it will be boring in a few years. There is nothing exciting about connecting information to hungry minds if you are not interesting. I am not actively involved in the future of search engines, but I am interested and excited to say the very least.

Voice Recognition & Emotion Understanding

In the future, computers will become better at understanding speech and at mapping the appropriate words to the sounds we make. Dragon NaturallySpeaking already does a good job of this. And as the information revolution unfolds, computers will become more able to understand emotion and what we are really "searching" for. Eventually, advanced biofeedback monitoring equipment will aid us in discovering our true passions and what we want (what we are searching for).

Better Resources to Search Through

Currently programs like Google AdSense encourage the creation of solid content. This will improve the quality of content which search engines are able to find - currently one of their limiting factors.

Search Engines Indexing Dynamic Content

On the technical back end, computers also need to be able to follow links and dynamic content with greater ease. This will require the design and implementation of ultra-premium spider monitoring software. Meanwhile, collecting information about the world around us will only become easier, and as the pool of data continues to grow, so will the quality of distributed computing.

Yahoo!'s content acquisition program aims to index more dynamic content. Dipsie is to be launched in 2004 with the ability to execute form boxes.

Find out more about Google
Click here to find out what Search Engine Optimisation is

Fun has begun on the market - HTTPool &

HTTPool enters the Romanian market as a full-service online advertising network, following the market trends by bringing a combined approach to the Online Industry - full-service agency & online advertising network. You can read the full story on Hotnews and a more detailed analysis on Doru Panaitescu’s blog. I met HTTPool’s CEO, Timotej, in Warsaw at the InternetCEE conference last November, and he is a fun guy with a sound business focus and a clear understanding of the regional markets. I was impressed that such a young guy can lead such a big operation, but I guess this is the way in the Online Industry. It’s funny that neither of the sources that bring us the news mentions anything about Timotej.

Timotej Gala, Chief Executive at Httpool Internet Marketing Central and Eastern Europe (Balkan).

Timotej Gala is responsible for media-buying and Internet advertising operations in five CEE markets. Since 2003, when he joined the company, he has successfully positioned Httpool as the market leader in online marketing in South-East Europe. Furthermore, he actively promotes Internet marketing in the region presenting the latest trends and opportunities of online marketing through media and marketing events.


YahooLilu - Bobby Voicu (Yahoo! Romania Community Manager)

Organized by Cristian Manafu/Evensys and proudly sponsored by Microsoft, the first edition of Web Club took place last Thursday in Club Colocvio and gathered more than 90% of the online industry from Romania. The joke of the evening was that if something happened at the location (an airplane crash, God forbid), the business would probably freeze for a couple of years.

WebClub will consist of approximately 10 meetings per year, and access will be provided based on a membership fee. Access for new members will be granted based on recommendations from at least 2 current members. It’s a great opportunity for networking, but also for a sneak preview of exclusive presentations and news and, probably in the future, a chance to meet angel investors interested in the online business.

WebClub #1 consisted of some really interesting presentations from Vladimir Oane, about (a) online entrepreneurship in Romania compared to Europe and (b) Ubervu; the news that Calin Fusu agreed to invest in Vertimo, a vertical search engine for local real estate, created by Brainient (Emi Gal); and great networking.


El Cotillo is situated on the northwest corner of Fuerteventura. Cotillo's main attractions are its wonderful beaches and lagoons. The DUPLEX is located to the north of the village, just in front of the fine white-sand lagoons. The closer you walk towards the lighthouse, the more secluded the lagoons become. All of these lagoons are safe for young children to swim and play in.

Google Map Cotillo ... click here.

Beachfront villa for rent by owner in El Cotillo Fuerteventura

The village also offers a few shops, bars, 2 banks and 3 well-stocked supermarkets, and if all this isn’t enough, Corralejo, the main tourist area of Fuerteventura, is only 19km away, a 15-20 minute drive. There is an excellent, reliable bus service, which is inexpensive and a great way of getting around Fuerteventura. Cotillo has fantastic beaches and lagoons that lie to the south of this little harbour town. From the cliffs there are several footpaths that lead down to Playa del Castillo, one of the nicest beaches in this region.

Click here for rates & availability

Cotillo Villas and apartments for rent by owner