Deliver search-friendly JavaScript-powered websites (Google I/O '18)



Check out the new JavaScript SEO series → https://bit.ly/2UeQ8Do

Building a website that delights users and also performs well in Google Search requires combining specific server-side and client-side technologies. Learn best practices for building and deploying indexable sites and web apps with JavaScript frameworks. Whether you use Angular, Polymer, React, or another framework to build a full PWA, or only use one for parts of your site, the lessons from this session apply to most modern setups. The session also covers search-friendly design patterns and SEO best practices.

Rate this session by signing in on the I/O website → https://goo.gl/HgVRhG

Watch more Webmasters sessions from I/O '18 here → https://goo.gl/WNZuwe
See all the sessions from Google I/O '18 here → https://goo.gl/q1Tr8x

Subscribe to the Chrome Developers channel → http://goo.gl/LLLNvf

#io18

22 comments
  1. This presentation just begs the same question over and over: why not make Googlebot better? Shifting the burden onto all these web developers… or just improve Googlebot to handle modern practices? Oh, your indexing bot doesn't know how to read/index pages that a human can reason about? Sounds like your bot could be improved. It uses Chrome 41—why? etc.

    Don't get me wrong, I think web developers should do all they can to improve SEO (especially with JSON-LD structured data), but some of these limitations of Googlebot are just annoying.

  2. On my website I use the #! fragment, and it is perfect for users: I show the content without refreshing the whole page. But now Google does not recommend this, and my site has fallen in terms of indexed pages and therefore in rankings too.

    I do not understand why they do not index the content that comes after #!. Google always recommends focusing on users when building a site, but this no longer seems to be the case. In my case the site works perfectly for users, they see the content, but for Google that is now insignificant, and if I have to change something in my code, it is so that Google can interpret it. Contradictory, no?

    Anyway, this does not happen in search engines like Bing or DuckDuckGo; there they crawl all the content of my site. Google says to use the History API, which I have been trying to do, but I cannot make it work for my case.

    So, do we focus on users or search engines?
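The History API migration the commenter describes can be sketched roughly as follows. This is a hypothetical, minimal router, not the session's code: `hashBangToPath` is a pure helper that upgrades a legacy `#!` URL to a clean path, and `renderContentFor` stands in for whatever rendering function the app already has.

```javascript
// Map a legacy hash-bang URL to a clean, crawlable path.
// Pure helper, so it can also be reused server-side for 301 redirects.
function hashBangToPath(url) {
  const i = url.indexOf('#!');
  if (i === -1) return null;          // not a hash-bang URL
  const fragment = url.slice(i + 2);  // e.g. "/products/42"
  return fragment.startsWith('/') ? fragment : '/' + fragment;
}

// Browser-only wiring (guarded so the helper above stays portable).
if (typeof window !== 'undefined') {
  // On load, upgrade a #! URL to a clean one without reloading the page.
  const clean = hashBangToPath(window.location.href);
  if (clean !== null) {
    history.replaceState({}, '', clean);
  }

  // In-app navigation: update the URL and render client-side, no refresh.
  function navigate(path) {
    history.pushState({}, '', path);
    renderContentFor(path);           // hypothetical app render function
  }

  // Keep the back/forward buttons working.
  window.addEventListener('popstate', () => {
    renderContentFor(window.location.pathname);
  });
}
```

The key point for crawling is that each clean path must also be served by the server (or fall back to the app shell), so every piece of content gets a real, fetchable URL instead of living only after the `#!`.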

  3. You're missing a 'b' in a part of the info 🙂
    "Watch more >Wemasters< sessions from I/O '18 here"
    Just trying to help, keep being awesome and an inspiration! 🙂

    Awesome video <3

  4. Questions: You mention using the mobile friendly tool and the rich results testing tool as rendering test platforms, essentially. Why do this instead of using Fetch and Render in Search Console? In fact the first time I tried to use the rich results tool it told me that the page was not eligible for "rich results known by this test."

  5. Dynamic rendering is so ridiculous… What makes you think I'm going to code like a #$%#! just to make your job simpler, when implementing it requires significant infrastructure? Google does incredible things many times, but this… this goes nowhere. I really don't think people are going to implement it, or if they try, they'll abandon it afterwards…

  6. How can we make sure that Google will not consider dynamic rendering to be cloaking? Previously the recommendation was not to check for Googlebot.

  7. I have a question: we're working on a brand-new site built on JS. We've blocked it with robots.txt because we're afraid the bot might index a lot of "empty" pages, ones without dynamic rendering… However, I want to test how Googlebot will see those pages. But I can't test that until I unblock it in robots.txt, right? I mean, I can't even use Search Console's "Fetch as Google" while it's blocked by robots.txt. So what might be the solution to check how Googlebot will render my site without opening up the robots.txt file?

  8. 23:55 proposes a solution of implementing server-side rendering only for Googlebot. That might be a good solution; however, I thought that was considered search-engine cloaking (serving different results to users and bots), which would get your SEO penalized… wouldn't it?
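The cloaking worry in the last two comments hinges on one detail: dynamic rendering serves the *same content* to crawlers and users, only the rendering strategy differs. A minimal sketch of the detection side, assuming an Express-style server; the `BOT_PATTERNS` list is illustrative and non-exhaustive, and `prerender` is a hypothetical function returning server-rendered HTML:

```javascript
// Illustrative crawler user-agent substrings (an assumption, not an
// official list -- check each search engine's documentation).
const BOT_PATTERNS = ['Googlebot', 'Bingbot', 'DuckDuckBot'];

// Pure helper: decide whether a request should get prerendered HTML.
function isCrawler(userAgent) {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOT_PATTERNS.some(p => ua.includes(p.toLowerCase()));
}

// Hypothetical Express middleware. Crawlers get prerendered HTML of the
// same page; everyone else falls through to the normal JS app shell.
// Serving identical content either way is what keeps this from being
// cloaking.
function dynamicRendering(prerender) {
  return async (req, res, next) => {
    if (isCrawler(req.get('user-agent'))) {
      res.send(await prerender(req.originalUrl));
    } else {
      next();
    }
  };
}
```

In practice the `prerender` step is usually delegated to a headless-Chrome service rather than hand-rolled, but the user-agent split above is the part the cloaking question is really about.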
