Google has reportedly removed many of Twitter's links from its search results after the social network's owner, Elon Musk, announced that reading tweets would be limited.
Search Engine Roundtable found that Google had removed 52% of Twitter links since the crackdown began last week. Twitter now blocks users who are not logged in and sets limits on reading tweets.
According to Barry Schwartz, Google reported 471 million Twitter URLs as of Friday. But by Monday morning, that number had plummeted to 227 million.
"For normal indexing of these Twitter URLs, it seems like these tweets are dropping out of the sky," Schwartz wrote.
Platformer reported last month that Twitter refused to pay its bill for Google Cloud services.
I feel like Google is going to have to find a way to effectively index federated content at some point. The only way to really get human information is from sites like Reddit and Twitter. And both of those platforms seem to be dedicated to completely imploding at the moment.
Fuck Google, if Lemmy continues to take off we can just develop better search tools within the fediverse. The wider internet has been colonized, the path forward cannot rely on big tech corporations.
I'm not a programmer/developer so I don't even understand the scale of the work that has yet to be done. But I am deeply committed to upsetting the status quo, and this platform feels distinctly revolutionary. Can't wait to see what the future holds for Lemmy.
It's all well and good to have a revolution, but if nobody knows you're having one then nothing really changes. There are still benefits to centralised services, one of which is scale. To effectively index that much data you need scale, which is why smaller search engines tend to be just white labels of things like Bing.
100k people isn't nobody. Centralized services can be useful at times, but there is no fundamental law preventing a decentralized system from providing the same functionality.
The value of indexing data drops drastically when much of that data is junk, as is the case on the wider internet. Because Lemmy is a federation, there is a built-in system to filter the junk.
There's nothing about the content being federated that makes it hard or impossible to index. Each instance is just a website with public pages that a bot can read. That's all a search engine needs to index it. The worst case is that the bot finds the same content on multiple instances.
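That worst case is a familiar one for crawlers: duplicate detection. A minimal sketch, keyed on a content hash (the URLs and post text below are made up; a real crawler would prefer the canonical/ActivityPub object URL when an instance exposes one):

```python
# Collapse the same federated post seen on several instances into
# one indexed document, using a hash of the normalized body text.
import hashlib

def content_key(text: str) -> str:
    """Hash of normalized post text, used as a dedup key."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Hypothetical crawl results: the same post federated to two instances.
pages = [
    ("https://lemmy.world/post/1", "Great thread about search"),
    ("https://beehaw.org/post/9", "Great thread about search"),  # federated copy
]

seen = {}
for url, body in pages:
    # Keep only the first URL we saw for each distinct piece of content.
    seen.setdefault(content_key(body), url)

print(len(seen))  # 1 -- both URLs collapse into a single document
```

Exact-hash matching is the crudest option; real engines use near-duplicate detection, but the principle is the same.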
I did read that the website is loaded entirely through JavaScript, and that Googlebot might not execute JavaScript and so can't see the text. I don't know if that's still a problem in 2023, though.
This article says it's not a problem, but I didn't read past the tl;dr, so maybe there's a caveat. Like maybe it has to use a popular framework like React or something to work.
Googlebot does execute JavaScript, but since rendering JS needs much more resources, JS crawling happens significantly less often than plain HTTP crawling. That's why all the big sites still return server-side rendered content.
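A quick way to check is to look at what a non-JS crawler actually receives: the raw HTML, before any script runs. A sketch (the fetch helper and the SPA shell below are illustrative, not any real site's markup):

```python
# A crawler that doesn't execute JavaScript only sees the raw server
# response. If the content is injected client-side, it's invisible to it.
import urllib.request

def raw_html(url: str) -> str:
    """Fetch a page the way a non-rendering crawler would: no JS execution."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def server_rendered(html: str, phrase: str) -> bool:
    """True if `phrase` is already present in the HTML before any JS runs."""
    return phrase in html

# A JS-only app typically ships a near-empty shell like this:
spa_shell = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'
print(server_rendered(spa_shell, "Hello, crawler"))  # False
```

If the phrase only appears after rendering in a browser, the site depends on the crawler's (rarer, more expensive) JS rendering pass.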
DuckDuckGo (which uses Microsoft's Bing index, I believe) is already able to find Lemmy instances.
The problem is that since every instance has its own domain, you cannot search all of Lemmy, or the more obscure fediverse, at once. lemmy.world, beehaw.org, and programming.dev are all different "websites".
I append "reddit" to my query when I want to search reddit for a human answer to a question. Can't do that with Lemmy, unless the instance is branded as Lemmy.
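You can partially fake it today by OR-ing `site:` filters over a hand-maintained instance list. A sketch (the instance list is illustrative, not exhaustive, and search engines cap query length, so this doesn't scale to thousands of instances):

```python
# Approximate "search all of Lemmy" by restricting a query to a
# known list of instance domains with site: filters.
INSTANCES = ["lemmy.world", "beehaw.org", "programming.dev"]  # illustrative subset

def lemmy_query(terms: str) -> str:
    """Build a search-engine query limited to the known Lemmy instances."""
    sites = " OR ".join(f"site:{d}" for d in INSTANCES)
    return f"{terms} ({sites})"

print(lemmy_query("rust async tutorial"))
# rust async tutorial (site:lemmy.world OR site:beehaw.org OR site:programming.dev)
```

It's a stopgap, which is exactly why a shared index of federated instances would help.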
Unless there's an org or volunteers that index federated instances and make them available to search engines so they can be differentiated, finding stuff in the fediverse might be difficult...
Maybe, but I'm a bit more optimistic. I think even if they just ran a read-only service that pulls from federated sources, like their web crawlers do for regular sites, they would basically be done.
The only concern there would be people trying to block them like everyone has been doing to Meta.
Yes, they have web scrapers that index according to each site's robots.txt; you can see what Twitter allows to be scraped by visiting here
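Python's standard library can evaluate those rules directly. A sketch using `urllib.robotparser` (the rules below are made up for illustration, not Twitter's actual robots.txt, and the URLs are placeholders):

```python
# Check whether a given crawler may fetch a URL under a robots.txt policy.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block the search pages, allow everything else.
robots_txt = """\
User-agent: Googlebot
Disallow: /search
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/status/123"))   # True
print(rp.can_fetch("Googlebot", "https://example.com/search?q=x"))   # False
```

In practice you'd point `RobotFileParser.set_url()` at the live `https://<site>/robots.txt` and call `read()` instead of parsing a hard-coded string.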
That being said, some sites are prioritized over others, so it's possible they just deprioritized Twitter since it's no longer as crawler-friendly for them. The current robots.txt rules are super strict as well, so it could just be self-imposed.
Just put ‘site:lemmy.world’ into Google to see what it has indexed on that instance for example. I don’t think Lemmy is optimised for search yet, but I saw some GitHub threads around the topic.