Want to learn more about Bing and search rankings? Read on.

Few people talk about SEO for Bing because there isn’t much information out there. The funny thing is that many cutting-edge technologies and techniques were in use at Bing before Google. Fabrice Canel, Senior Program Director at Bing, recently shared a wealth of information with Jason Barnard of Kalicube about not only how Bing works, but how search engines work in general.

Content indexing criteria

Fabrice is in charge of the Bingbot crawler, URL discovery and selection, document processing, and Bing Webmaster Tools. He is a good person to turn to for information about search engines, especially page crawling and selection.

Fabrice describes the crawling process here, and I think the important takeaway is how picky he says Bing is about what it chooses to index.

A lot of people think that every page on their site deserves a chance to rank. But Google and Bing don’t index everything.

They tend to leave certain types of pages behind.

The first characteristic of a page that Bing wants to index is that it is useful.

Screenshot by Jason Barnard

Fabrice Canel explains:

“We are obviously business-oriented to satisfy the end customer, but we have to choose.

We can’t crawl everything on the Internet; there is an endless number of URLs.

You have pages with calendars. You can go to tomorrow forever.

So it’s really about finding what is most useful to satisfy a Microsoft Bing customer.”

Bing and key domains

Fabrice then talks about the concept of key domains and how Bing is guided by key pages on the internet that lead it to quality content.

This sounds like an algorithm that incorporates a seed set of trusted sites, where the further a site sits from those key sites in the link graph, the more likely it is to be spam or of little value (link distance ranking algorithms).
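Nothing in the interview confirms that this is how Bing works, but the general seed-set idea can be sketched: compute each page’s link distance from a trusted seed set with a breadth-first search, and treat unreachable or very distant pages as suspect. The site names and link graph below are invented for illustration.

```python
from collections import deque

def link_distances(graph, seeds):
    """Breadth-first search outward from a seed set of trusted pages.

    `graph` maps each page to the pages it links to. Pages never
    reached from any seed get no distance at all, which in a
    link-distance model marks them as likely spam or low value.
    """
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for linked in graph.get(page, ()):
            if linked not in dist:
                dist[linked] = dist[page] + 1
                queue.append(linked)
    return dist

# Toy link graph: the trusted site links out to quality pages,
# while the spam island is unreachable from any seed.
web = {
    "trusted.example": ["news.example", "blog.example"],
    "news.example": ["blog.example", "deep.example"],
    "spam1.example": ["spam2.example"],
}
print(link_distances(web, ["trusted.example"]))
```

A real system would combine such a distance with many other signals; this only shows why sites far from the seed set fall off the crawler’s radar.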

I don’t want to put words in Fabrice’s mouth; the above is only my observation.

I’ll let Fabrice speak for himself.

Jason asked:

“Would you say that most of the content on the web isn’t useful, or is that an overstatement?”

Fabrice replied:

“I think that’s a bit of a stretch.

We are guided by key pages that are important on the internet and we follow links to understand the rest.

And if we really focus on those key areas (key pages), it guides us to great content.

So our view of the Internet is not to go endlessly deep, crawling unnecessary content.

The goal is obviously to keep the index fresh and complete, containing all of the most relevant content on the web.”

What Makes Bing Crawl Deep Into Websites

Jason then asks about the websites that are crawled in depth. Obviously, getting a search engine to index all of a site’s pages is important.

Fabrice explains the process.

Jason asked:

“Right. And I think that’s the key. You’d rather go wide than go deep.

So if I have a site at the top of the pile, you’ll tend to focus more on me than trying to find new things that you don’t already know?”

Fabrice provided a nuanced response, reflecting the complicated nature of what is chosen for crawling and indexing:

“It depends. If you have a site that specializes in a topic that interests the customer, we can obviously dig deeper.”

Machines choose what to explore

We sometimes anthropomorphize search engines by saying things like “the search engine doesn’t like my site.”

But in reality, there is nothing in the algorithms about liking or trusting.

Machines do not like.

Machines do not trust.

Search engines are machines that are basically programmed with goals.

Fabrice explains how Bing chooses whether or not to crawl in depth:

“I am not the one who chooses where we go in depth and where we don’t. It’s not my team, either.

It’s the machine.

It’s machine learning that chooses whether or not to dig deeper, based on what we think is important to a Bing customer.”

That phrase, important to the customer, is worth pausing on. The search engine, in this case Bing, is tuned to identify pages that are important to customers.

When writing an article or even creating an ecommerce page, it can be helpful to look at the page and ask, “How can I make this page important to the people who visit it?”

Jason then asked a question to get more information on what is involved in selecting what is important to site visitors.

Jason asked:

“Are you just giving the machine the goals you want it to achieve?”

Fabrice replied:

“Absolutely yes.

The main input we give the machine learning algorithms is the goal of satisfying Bing customers.

And so we are looking at different dimensions to satisfy Bing customers.

Again, if you search for Facebook, you want the Facebook link in the first position. You don’t want random blogs talking about Facebook.”

Search crawling is dated and needs an update

Jason asks Fabrice why IndexNow is useful.

Fabrice responds by describing what crawling is today and how this method of finding content to index, which dates back almost thirty years, needs updating.

The old and current method of crawling is to visit the website and “pull” its data, even if the web pages are the same and have not changed.

Search engines must keep revisiting the entire indexed web to see whether any new pages, text, or links have been added.

Fabrice says the way search engines crawl websites needs to change because there is a better way to do it.

He explained the fundamental problem:

“So the crawl model is really about learning, trying to figure out when things change.

When will Jason post again? Maybe we can model it. Maybe we can try to figure it out. But we really don’t know.

So what we do is pull, pull, crawl, and crawl to see if anything has changed.

That is the crawl model today. We can learn from links, but at the end of the day we go to the homepage and find out. This model must therefore change.”
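The pull model he describes can be made concrete with a minimal sketch (my illustration, not Bing’s code): the crawler must re-fetch a page just to learn that nothing changed, for example by comparing a hash of its bytes against the last visit.

```python
import hashlib

def content_changed(previous_hash, page_bytes):
    """Pull-model change detection: after re-fetching a page, hash its
    bytes and compare with the hash from the last crawl. The fetch is
    wasted work whenever the page turns out to be unchanged."""
    current = hashlib.sha256(page_bytes).hexdigest()
    return current != previous_hash, current

# Simulate two crawl visits to the same page.
seen = hashlib.sha256(b"<html>old</html>").hexdigest()
changed, seen = content_changed(seen, b"<html>old</html>")
print(changed)  # False: the crawler fetched the page for nothing
changed, seen = content_changed(seen, b"<html>new</html>")
print(changed)  # True: only now was the fetch worthwhile
```

Multiplied across billions of URLs, those wasted fetches are exactly the cost the push model below is meant to remove.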

Fabrice then explained the solution:

“We need input from the website owner, Jason, and Jason can tell us through a simple API that the website content has changed, which helps us discover that change: be notified, send the crawler, and get the latest content.

It’s an industry-wide shift from crawling and crawling and crawling to find out whether anything has changed…”

The current state of search

Google tends to call the people who use its search engine users. Bing frames searchers as customers, and with that come all the little aphorisms implicit in a customer-first approach: the customer is always right, give the customer what they want.

Steve Jobs said this about customers versus innovation, which relates a bit to Bing’s IndexNow but also to publishers:

“You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”

Is the future of search push?

Bing has rolled out a new push technology called IndexNow. It is a way for publishers to notify search engines to come and crawl new or updated web pages. This saves hosting and data center resources in the form of electrical power and bandwidth. It also gives publishers confidence that the search engine will pick up content sooner, via a push, rather than later, as with the current crawl method.
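A submission looks roughly like this (the endpoint and JSON fields follow the published IndexNow protocol; the host, key, and URLs below are made-up examples — the key is a token the site owner hosts in a text file at the site root so the search engine can verify the notification really comes from that site):

```python
import json
from urllib import request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls):
    """Build the JSON body for an IndexNow URL submission."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # proves site ownership
        "urlList": list(urls),
    }

def submit(host, key, urls):
    """POST the changed URLs to the shared IndexNow endpoint."""
    body = json.dumps(build_indexnow_payload(host, key, urls)).encode("utf-8")
    req = request.Request(
        INDEXNOW_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with request.urlopen(req) as resp:
        return resp.status  # 200 or 202 means the submission was accepted

# Example payload for a hypothetical site (no network call made here):
payload = build_indexnow_payload(
    "www.example.com",
    "e4249822d1e34aa2a1a1b8f8f38ba55c",
    ["https://www.example.com/new-post"],
)
print(json.dumps(payload, indent=2))
```

Participating engines share submissions with each other, so one notification is meant to reach them all; check the IndexNow documentation for the current list of engines and limits before relying on this sketch.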

This is only part of what has been discussed.

Find the full interview with Fabrice Canel