We have all seen AI-based searches available on the web like Copilot, Perplexity, DuckAssist etc, which scour the web for information, present it in a summarized form, and also cite sources in support of the summary.
But how do they know which sources are legitimate and which are simple BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the "trustworthiness" of various web sources?
I imagine it’s the same as Google search.
Pretty much. The same question can be answered with "how can anyone trust the search results that come up on Google?" The answer is you can't, which is why the AI shows you the sources it got the info from and you can decide for yourself.
This place sounds like old people, did you know wikipedia can be edited by anyone? 😱
Why is this downvoted?
It’s the right response. The top link is given credibility through a ranking algorithm and is not guaranteed to have the right info. An LLM is trained on a large corpus of (hopefully) quality data, but may not return the right information either. Both may lead you to the wrong results, and it’s always been the user’s responsibility to verify information.
The only major difference between search and an LLM is that the LLM believes it knows the answer and search just tells you “this is the most relevant thing I could find”.
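To make the retrieve-summarize-cite loop discussed above concrete, here is a toy sketch of how such a system could be structured. Everything in it is hypothetical: real AI search engines use a crawled web index, learned relevance rankers, and an LLM for summarization, not the naive keyword overlap and string concatenation shown here.

```python
# Hypothetical mini "web index": (url, text) pairs standing in for crawled pages.
CORPUS = [
    ("https://example.org/solar", "The sun is a star at the center of the solar system."),
    ("https://example.org/moon", "The moon orbits the earth roughly every 27 days."),
    ("https://example.org/spam", "Buy cheap watches now, best deals around."),
]

def score(query: str, text: str) -> int:
    """Rank a page by naive keyword overlap with the query
    (a crude stand-in for a real ranking algorithm)."""
    q_words = set(query.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words)

def answer(query: str, k: int = 2) -> str:
    """Retrieve the top-k pages, 'summarize' them (here: just concatenate),
    and cite each source so the reader can verify for themselves."""
    ranked = sorted(CORPUS, key=lambda page: score(query, page[1]), reverse=True)
    top = [page for page in ranked[:k] if score(query, page[1]) > 0]
    summary = " ".join(f"{text} [{i + 1}]" for i, (_, text) in enumerate(top))
    sources = "\n".join(f"[{i + 1}] {url}" for i, (url, _) in enumerate(top))
    return f"{summary}\nSources:\n{sources}"

print(answer("what is the sun"))
```

Note that the ranking step never checks whether a page is *true*, only whether it is *relevant*; the cited URLs are what lets the user do the verification that neither the ranker nor a summarizer can guarantee.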