Deliver search-friendly JavaScript-powered websites (Google I/O '18)



Check out the new JavaScript SEO series → https://bit.ly/2UeQ8Do

Building a site that delights users and also performs well in Google Search requires combining specific server-side and client-side technologies. Learn the best practices for building and deploying indexable sites and web apps with JavaScript frameworks. Whether you're building a full PWA with Angular, Polymer, React, or another framework, or only using them for parts of your site, the lessons from this session can be applied with most modern setups. The session also covers search-friendly design patterns and SEO best practices.

Rate this session by signing in on the I/O website → https://goo.gl/HgVRhG

Watch more Webmasters sessions from I/O '18 here → https://goo.gl/WNZuwe
See all the sessions from Google I/O '18 here → https://goo.gl/q1Tr8x

Subscribe to the Chrome Developers channel → http://goo.gl/LLLNvf

#io18

22 comments
  1. This presentation just begs the same question over and over: why not make Googlebot better? Shifting the burden onto all these web developers… or just improve Googlebot to handle modern practices? Oh, your indexing bot doesn't know how to read/index pages that a human can reason about? Sounds like your bot could be improved. It uses Chrome 41—why? etc.

    Don't get me wrong, I think web developers should do all they can to improve SEO (especially with JSON-LD structured data), but some of these limitations of Googlebot are just annoying.

  2. On my website I use the #! fragment, and it works great for users: I show the content without refreshing the whole page. But now Google no longer recommends this, and my site has dropped in indexed pages and therefore in ranking too.

    I don't understand why they don't process the content that comes after #!. Google always recommends focusing on users when building a site, but that's no longer the case here. In my case the site works perfectly for users, they see the content, but for Google that's now insignificant, and if I have to change something in my code it's only so Google can interpret it. Contradictory, no?

    Anyway, in search engines like Bing or DuckDuckGo this doesn't happen; there they do crawl all the content of my site. They say to use the History API, which I've been trying to do, and I can't make it work for my case.

    So, do we focus on users or on search engines?
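The migration the commenter above is struggling with — moving from #! fragments to the History API — can be sketched roughly as follows. This is a minimal illustration, not code from the talk; the `hashbangToPath` helper and the commented-out `renderRoute` call are hypothetical names:

```javascript
// Hedged sketch: migrating from #! (hashbang) URLs to the History API.
// `hashbangToPath` and `renderRoute` are illustrative names, not an
// official API from the session.

// Map a legacy hashbang URL to a clean path the History API can use.
function hashbangToPath(url) {
  const idx = url.indexOf('#!');
  if (idx === -1) return null; // not a hashbang URL
  const fragment = url.slice(idx + 2); // e.g. "/products/42"
  return fragment.startsWith('/') ? fragment : '/' + fragment;
}

// In the browser, replace the fragment URL with a real, crawlable path
// instead of updating the fragment (guarded so the sketch also runs in Node).
if (typeof window !== 'undefined' && window.history) {
  const path = hashbangToPath(window.location.href);
  if (path) {
    history.replaceState({}, '', path); // refresh-free, indexable URL
    // renderRoute(path); // hypothetical client-side router call
  }
}

console.log(hashbangToPath('https://example.com/#!/products/42'));
// -> "/products/42"
```

The server would also need to answer requests for those clean paths directly (e.g. by serving the app shell), since, unlike fragments, they are sent to the server on a full page load.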

  3. You're missing a 'b' in a part of the info 🙂
    "Watch more >Wemasters< sessions from I/O '18 here"
    Just trying to help, keep being awesome and an inspiration! 🙂

    Awesome video <3

  4. Questions: You mention using the mobile friendly tool and the rich results testing tool as rendering test platforms, essentially. Why do this instead of using Fetch and Render in Search Console? In fact the first time I tried to use the rich results tool it told me that the page was not eligible for "rich results known by this test."

  5. Dynamic rendering is so ridiculous…. What makes you think I'm going to code like a #$%#! just to make your job simpler, when implementing it requires significant infrastructure? Google does incredible things so often, but this…. this goes nowhere. I really don't think people are going to implement this, or if they try, they'll abandon it afterwards…..

  6. How can we make sure that Google won't consider dynamic rendering to be cloaking? Previously the recommendation was not to check for Googlebot.

  7. I have a question: we're working on a brand-new site built on JS. We've blocked it with robots.txt because we're afraid the bot might index a lot of "empty" pages, i.e. pages without dynamic rendering… However, I want to test and see how Googlebot will render those pages. But I can't test that until I unblock them in robots.txt, right? I mean, I can't even use Search Console's "Fetch as Google" while the site is blocked by robots.txt. So what might be the solution for checking how Googlebot will render my site without opening up the robots.txt file?

  8. 23:55 offers a solution: implementing server-side rendering only for Googlebot. That might be a good solution; however, I thought that was considered search-engine cloaking (serving different results to users vs. bots), which would penalize your SEO… wouldn't it?
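The dynamic-rendering setup the last two comments ask about boils down to a user-agent check on the server. Here is a minimal sketch under stated assumptions: the `handleRequest` entry point is hypothetical, and the bot list is illustrative, not exhaustive:

```javascript
// Hedged sketch of the user-agent check behind dynamic rendering:
// serve pre-rendered HTML to known crawlers, and the normal
// client-rendered app to everyone else. The bot list and handler
// names are illustrative, not an official implementation.

const BOT_PATTERNS = [
  /googlebot/i,
  /bingbot/i,
  /duckduckbot/i,
  /baiduspider/i,
  /twitterbot/i,
  /facebookexternalhit/i,
];

function isCrawler(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ''));
}

// Hypothetical request handler: route crawlers to a prerenderer
// (e.g. Rendertron or headless Chrome) and users to the SPA shell.
function handleRequest(req) {
  return isCrawler(req.headers['user-agent'])
    ? 'serve-prerendered-html'
    : 'serve-client-app';
}

console.log(isCrawler('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'));
// -> true
```

On the cloaking question: Google's stated position is that dynamic rendering does not count as cloaking as long as the crawler receives content equivalent to what users see; it becomes cloaking only when the two versions differ in substance.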

Comments are closed.