The best way to submit your URL to Google is explained on this page, along with what actually happens during a Google submission.
Google Submission – Submitting your URL to Google
Google is primarily a fully automated search engine, with no human intervention involved in the search process. It uses robots known as ‘spiders’ to crawl the web regularly for updates and for new websites to include in the Google index.
This robot software follows hyperlinks from site to site. Google does not require you to submit your URL for inclusion in the index, as the ‘spiders’ do this automatically.
However, you can submit a URL manually by going to the Google website and clicking the relevant link. Importantly, Google does not accept payment of any sort for site submission or for improving your website’s ranking. Nor does submitting your site through the Google website guarantee a listing in the index.
Cloaking – “Google Submission”
Sometimes, a webmaster might program the server in such a way that it returns different content to Google than it returns to regular users, which is often done to misrepresent search engine rankings. This process is referred to as cloaking as it conceals the actual website and returns distorted web pages to search engines crawling the site. This can mislead users about what they’ll find when they click on a search result.
Google strongly disapproves of any such practice and may ban a website found to be cloaking.
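For illustration only, here is a minimal Python sketch of the pattern described above: a server handler that inspects the User-Agent header and returns different content to a crawler than to ordinary visitors. The function name and page strings are invented for this example; it shows what Google penalizes, not something to deploy.

```python
def serve_page(user_agent: str) -> str:
    """Return different HTML depending on who is asking -- the cloaking
    pattern that Google bans. Shown only to illustrate the practice."""
    if "Googlebot" in user_agent:
        # Keyword-stuffed page shown only to the search engine crawler.
        return "<html>cheap widgets cheap widgets cheap widgets</html>"
    # Ordinary page shown to human visitors.
    return "<html>Welcome to our widget store.</html>"

print(serve_page("Googlebot/2.1"))
print(serve_page("Mozilla/5.0"))
```

Because the crawler and the visitor see entirely different text, the search result misrepresents the page, which is exactly why the practice is treated as spam.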
Here are some of the important tips and tricks that can be employed while dealing with Google.
Google Submission – Do’s
• A website should have a crystal-clear hierarchy and links, and should preferably be easy to navigate.
• A site map helps users find their way around your site; if the site map has more than 100 links, it is advisable to break it into several pages to avoid clutter.
• Come up with essential and precise keywords and make sure that your website features relevant and informative content.
• The Google crawler cannot read text embedded in images, so when presenting important names, keywords, or links, stick with plain text.
• The TITLE and ALT tags should be descriptive and accurate and the website should have no broken links or incorrect HTML.
• Dynamic pages (the URL consisting of a ‘?’ character) should be kept to a minimum as not every search engine spider is able to crawl them.
• The robots.txt file on your web server should be current and should not block the Googlebot crawler. This file tells crawlers which directories can or cannot be crawled.
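To check how a crawler would interpret your robots.txt rules, you can use Python’s standard-library `urllib.robotparser`. The rules below are a hypothetical example, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block all crawlers from /private/, allow the rest.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))           # True
```

Running a check like this before publishing helps ensure the file does not accidentally block the Googlebot crawler from pages you want indexed.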
Google Submission – Don’ts
• When making a site, do not cheat your users, i.e. those people who will surf your website. Do not provide them with irrelevant content or present them with any fraudulent schemes.
• Avoid tricks or link schemes designed to increase your site’s ranking.
• Do not employ hidden text or hidden links.
• Google frowns upon websites that use cloaking, so it is advisable to avoid the technique.
• Automated queries should not be sent to Google.
• Avoid stuffing pages with irrelevant words and content. Also don’t create multiple pages, sub-domains, or domains with significantly duplicate content.
• Avoid “doorway” pages created just for search engines or other “cookie cutter” approaches such as affiliate programs with hardly any original content.
Google Services and Answers
Google Answers is an interesting cross between an ‘online marketplace’ and a ‘virtual classroom’. Those who wish to participate must register with Google Answers. Researchers with considerable expertise in online research answer the questions posted by other users, for a fee.
When users post a question, they also state the price they are willing to pay for an answer. When a Researcher answers the question, the payment goes to that Researcher accordingly.
Moreover, the questions and the discussions that ensue are publicly viewable, and other registered users can also share their opinions and insights.
There is a non-refundable listing fee of $0.50 per question plus an additional ‘price’ you set for your question that reflects how much you’re willing to pay for an answer.
Three-quarters of your question price goes directly to the Researcher who answers your question; the remaining 25 percent goes to Google to support the service.
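Using a hypothetical $10.00 question price as an example, the fee split described above works out as follows:

```python
LISTING_FEE = 0.50        # non-refundable, per question
question_price = 10.00    # hypothetical price set by the asker

researcher_share = 0.75 * question_price  # goes to the Researcher
google_share = 0.25 * question_price      # goes to Google to support the service
total_cost = LISTING_FEE + question_price # what the asker pays in all

print(f"Researcher gets ${researcher_share:.2f}")  # $7.50
print(f"Google keeps   ${google_share:.2f}")       # $2.50
print(f"Asker pays     ${total_cost:.2f}")         # $10.50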
Google Groups is an online discussion forum containing the entire archive of Usenet discussion groups dating back to 1981. These discussions cover the full range of human discourse and present a fascinating look at evolving viewpoints, debate and advice on every subject from politics to technology.
Users can access all of this information in a database containing more than 800 million posts by using Google’s search feature.
Google’s Image Search
Google offers a wide collection of images from around the web; its comprehensive database consists of more than 425 million images. All a user has to do is to enter a query in the image search box, then click on the “Search” button.
On the results page, clicking a thumbnail shows a larger version of the image, as well as the web page on which the image is located.
By default, Google’s Image Search applies its mature-content filter to every initial search. The filter removes many adult images, but it cannot guarantee with 100% accuracy that all such content will be filtered out of image search results.
Google analyzes the text on the page near the image, the image caption, and dozens of other factors to determine the image content.
Google also uses several sophisticated algorithms to remove duplicates, which ensures that the highest-quality images are presented first in the results. Google’s Image Search supports complex search strategies such as Boolean operators.
Google’s Catalog Search
Google offers a unique service in the form of its Catalog Search. Google’s Catalog Search has made it easy to find information published in mail-order catalogs that were not previously available online. It includes the full content of hundreds of mail-order catalogs selling everything from industrial adhesives to clothing and home furnishings.
Google’s Catalog Search can help whether you are shopping for yourself or for your business.
The printed copies of catalogs are scanned and the text portion is converted into a format which makes it easy for users to search for the catalog. The same sophisticated algorithm employed by the Google Web Search is then employed to search for catalogs.
This ensures that the most recent and relevant catalogs are displayed. Google is not associated with any catalog vendor and is not liable for any misuse of this service on the part of its users.
The word ‘Froogle’ combines ‘frugal’, meaning ‘pennywise’ or ‘economical’, with, of course, ‘Google’. Currently in its beta, or testing, format, Froogle is a recent offering from Google.
Google’s spidering software crawls the web looking for information about products for sale online. It does so by focusing entirely on product search and applying the power of Google’s search technology to locate stores that sell items you want and consequently pointing you to that specific store.
Just like the Google Web Search, Froogle also ranks store sites based only on their relevance to the search terms entered by the users. Google does not accept payment for placement within their actual search results.
Froogle also includes product information submitted electronically by merchants. Its search results are automatically generated by Google’s ranking software.
AltaVista has an index that is built by sending out a crawler (a robot program) that captures text and brings it back. The main crawler is called “Scooter.” Scooter sends out thousands of threads simultaneously.
24 hours a day, 7 days a week, Scooter and its cousins access thousands of pages at a time, like thousands of blind users grabbing text, pulling it back, throwing it into the indexing machines so the next day that text can be in the index. And at the same time, they pull off, from all those pages, every hyperlink that they find, to put in a list of where to go to next.
In a typical day Scooter and its cousins visit over 10 million pages. If there are a lot of hyperlinks from other pages to yours, that increases your chances of being found. But if this is your own personal site, or if this is a brand new Web page, that’s not too likely.
More on Google Submission …
AltaVista has an incredibly large database of Web sites, such that searches often return hundreds of thousands of matches. AltaVista’s spider goes down about three pages into your site.
This is important to remember if you have different topical pages that won’t be found within three clicks of the main page. You will have to index them separately.
You cannot tell AltaVista how to index your site; that is all done via its spider. You can, however, go to its site and give the spider a nudge by submitting specific pages, so that AltaVista’s spider knows to visit and index them.
Once you have done that, it’s all up to your META tags and your page’s content! AltaVista’s spider may revisit your site each month after its initial visit.
AltaVista’s ranking algorithms reward keywords in the <TITLE> tag; if a keyword is not in the title tag, the page will likely not appear anywhere near the top of the search results. AltaVista also rewards keywords near one another, and keywords near the beginning of a page.
Add a Page
Search results on AltaVista are powered by Yahoo! Search Technology; for fast submission to the Yahoo! Search index, use the Yahoo! Search Marketing Search Submit program. If you give AltaVista a URL for a page that doesn’t exist, it will come back with an Error 404, which means there is no such page. If that page was in the index, it will be removed from the index the next day.
Also, consider technical factors. If a site has a slow connection, it might time-out for the crawler. Very complex pages, too, may time out before the crawler can harvest the text.
If you have a hierarchy of directories at your site, put the most important information high, not deep. Some search engines will presume that the higher you placed the information, the more important it is. And crawlers may not venture deeper than three or four or five directory levels.
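As a rough sketch of that advice, you can count how deep a page sits in a site’s hierarchy from its URL path; the example URL below is hypothetical:

```python
from urllib.parse import urlparse

def directory_depth(url: str) -> int:
    """Count how many directory levels deep a page sits in a site."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    # The last segment is the page itself; everything before it is a directory.
    return max(len(segments) - 1, 0)

print(directory_depth("https://example.com/products/widgets/blue/specs/page.html"))  # 4
print(directory_depth("https://example.com/index.html"))                             # 0
```

A page at depth four or five may never be reached by a crawler that stops after three or four directory levels, which is why important content belongs high in the hierarchy.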
Above all, remember the obvious: full-text search engines index text.
You may well be tempted to use fancy and expensive design techniques that either block search engine crawlers or leave your pages with very little plain text that can be indexed. Don’t fall prey to that temptation.
Ranking Rules Of Thumb
The simple rule of thumb is that content counts, and that content near the top of a page counts for more than content at the end. In particular, the HTML title and the first couple lines of text are the most important part of your pages.
If the words and phrases that match a query happen to appear in the HTML title or first couple lines of text of one of your pages, chances are very good that that page will appear high in the list of search results.
A crawler/spider search engine can base its ranking on both static factors (a computation of the value of a page, independent of any particular query) and query-dependent factors. Static factors include:
Long pages that are rich in meaningful text (not randomly generated letters and words).
Pages that serve as good hubs, with lots of links to pages that have related content (topic similarity, rather than random meaningless links, such as those generated by link-exchange programs or intended to create a false impression of “popularity”).
The connectivity of pages, including not just how many links there are to a page but where the links come from: the number of distinct domains and the “quality” ranking of those particular sites. This is calculated for the site and also for individual pages. A site or a page is “good” if many pages at many different sites point to it, and especially if many “good” sites point to it.
The level of the directory in which the page is found. Higher is considered more important. If a page is buried too deep, the crawler simply won’t go that far and will never find it.
These static factors are recomputed about once a week, and new good pages slowly percolate upward in the rankings. Note that there are advantages to having a simple address and sticking to it, so others can build links to it and so you know that it’s in the index.
Query-dependent factors include:
The HTML title.
The first lines of text.
Query words and phrases appearing early in a page rather than late.
Meta tags, which are treated as ordinary words in the text, but like words that appear early in the text (unless the meta tags are patently unrelated to the content of the page itself, in which case the page will be penalized).
Words mentioned in the “anchor” text associated with hyperlinks to your pages. (E.g., if lots of good sites link to your site with anchor text “breast cancer” and the query is “breast cancer,” chances are good that you will appear high in the list of matches.)
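These query-dependent factors can be illustrated with a toy scoring sketch. This is a deliberate simplification with invented weights, not any engine’s real algorithm:

```python
def toy_score(query: str, title: str, body: str, anchor_texts: list[str]) -> float:
    """Crude illustration of query-dependent ranking factors:
    title matches, early body text, and inbound anchor text."""
    q = query.lower()
    score = 0.0
    if q in title.lower():
        score += 10.0                    # HTML title match counts most
    pos = body.lower().find(q)
    if pos != -1:
        score += 5.0 / (1 + pos / 100)   # earlier in the page scores higher
    # Each inbound link whose anchor text contains the query adds a bonus.
    score += 2.0 * sum(q in a.lower() for a in anchor_texts)
    return score

page = toy_score(
    "breast cancer",
    title="Breast Cancer Research and Support",
    body="Breast cancer information for patients and families...",
    anchor_texts=["breast cancer", "cancer support"],
)
print(page)  # 17.0
```

Even in this toy form, the same pattern emerges that the factors above describe: a page whose title, opening text, and inbound anchor text all match the query dominates one that matches only deep in the body.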
Blanket policy on doorway pages and cloaking
Many search engines are opposed to doorway pages and cloaking. They consider doorway and cloaked pages to be spam and encourage people to use other avenues to increase the relevancy of their pages. Doorway pages and cloaking are described elsewhere in this guide.
Meta tags (Ask.com as an Example)
Though meta tags are indexed and treated as regular text, Ask.com claims it doesn’t give them priority over HTML titles and other text. Though you should use meta tags in all your pages, some webmasters claim their doorway pages for Ask.com rank better when they don’t use them.
In Summary …
To perform this Google submission:
- Log into Google Search Console and select the property you currently have listed with Google.
- In the left-hand sidebar, select Crawl > Fetch as Google.
- This pulls up a form where you can enter a URL path following your domain name and “Fetch” that particular page of your website.
If you do use meta tags, make your description tag no more than 150 characters and your keywords tag no more than 1,024 characters long.
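A simple length check for those limits can catch oversized tags before publishing; the sample tag text here is hypothetical:

```python
def check_meta_lengths(description: str, keywords: str) -> list[str]:
    """Flag meta tag content exceeding the suggested length limits."""
    problems = []
    if len(description) > 150:
        problems.append(f"description is {len(description)} chars (limit 150)")
    if len(keywords) > 1024:
        problems.append(f"keywords are {len(keywords)} chars (limit 1,024)")
    return problems

print(check_meta_lengths("A short site description.", "widgets, gadgets"))  # []
print(check_meta_lengths("x" * 200, "y" * 2000))  # two problems reported
```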