Perplexity AI: the answer engine with a lot of question marks

In the coming weeks, Reddit will start blocking most automated bots from accessing its public data. You’ll need a licensing deal, like the ones Google and OpenAI have struck, to use Reddit content for model training and other commercial purposes.

While this has technically been Reddit’s policy all along, the company is now enforcing it by updating its robots.txt file, the standard file that tells web crawlers which parts of a site they may access. “It’s a signal to those who don’t have an agreement with us that they shouldn’t be accessing Reddit data,” the company’s chief legal officer, Ben Lee, tells me. “It’s also a signal to bad actors that the word ‘allow’ in robots.txt doesn’t mean, and has never meant, that they can use the data however they want.”
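To see how robots.txt works in practice, here is a minimal sketch of how a well-behaved crawler consults the file before fetching a page, using Python’s standard-library parser. The rules and bot name below are illustrative, not Reddit’s actual robots.txt or any real crawler.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt that disallows all crawlers site-wide,
# similar in spirit to what a blanket block would look like.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A hypothetical generic bot asks whether it may fetch a page;
# under the rules above, the answer is no.
allowed = parser.can_fetch("ExampleBot", "https://example.com/some/page")
print(allowed)  # False
```

As Lee’s comment underscores, robots.txt is purely advisory: a crawler that ignores the parser’s answer faces no technical barrier, which is why licensing terms and enforcement matter beyond the file itself.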
