A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing. A Web crawler may also be called a Web spider, an ant, an automatic indexer, or (in the FOAF software context) a Web scutter.
This makes it hard to work out how a given search engine's ranking algorithm behaves, especially if you don't use that engine's analytics system. For instance, if I use Google Analytics but not Bing's equivalent, how does Bing's algorithm take my site's traffic into account?
Web Robots (also known as Web Wanderers, Crawlers, or Spiders) are programs that traverse the Web automatically. Search engines such as Google use them to index Web content, spammers use them to scan for email addresses, and they have many other uses.
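The core loop of such a robot is simple: fetch a page, extract its links, and enqueue any links not yet visited. Here is a minimal sketch in Python using only the standard library; to keep it self-contained it traverses a pre-fetched map of URL to HTML rather than making live HTTP requests (a real crawler would fetch each page with `urllib.request` or similar, and should respect `robots.txt`). The `crawl` function and the example page map are illustrative, not any particular engine's implementation.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


def crawl(pages, seed):
    """Breadth-first traversal over a {url: html} map (offline sketch)."""
    seen, queue, visit_order = {seed}, deque([seed]), []
    while queue:
        url = queue.popleft()
        visit_order.append(url)
        # In a live crawler this is where the page would be fetched.
        for link in extract_links(pages.get(url, ""), url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visit_order
```

For example, crawling a two-page map seeded at `http://example.com/` visits the seed first, then each newly discovered link in breadth-first order. Real crawlers add politeness delays, deduplicate by normalized URL, and check `robots.txt` (e.g. via `urllib.robotparser`) before fetching.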