Thank you, JohnAziz, I really appreciate your answer. Just one follow-up question: almost all the examples show indexing data from Blob Storage with many files. I believe this is because Cognitive Search can only vectorise from Blob Storage or Mongo.
What about real life, where, instead of storing documents in a blob, we store them as articles in an SQL database indexed in Azure Search (we use it as the search engine for our website)? What should we do to optimise it as much as possible for RAG and OpenAI:
1. In your experience, how bad is plain keyword-based Azure Search?
2. Would adding the semantic layer return significantly more relevant bits of the documents?
3. Or should we store our articles in Blob Storage and index them with the cognitive index + vectors + the semantic layer?
Ideally, I would like to skip the third option, but we also want to build the best possible OpenAI app over our own data. All of this is in the context of RAG: getting the most relevant bits of the documents passed to OpenAI.
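To make question 2 concrete, here is roughly what I imagine the query side would look like if we just switched on semantic ranking over our existing SQL-fed index. This is a sketch only: the index name `articles-index`, the semantic configuration name, and the `content` field are placeholders I made up, and the actual `SearchClient.search` call is left commented out.

```python
# Sketch of option 2: semantic ranking on the existing index, using the
# parameter names from the azure-search-documents Python SDK.
# "my-semantic-config" and "articles-index" are placeholder names.

def build_semantic_search_kwargs(query: str, top: int = 5) -> dict:
    """Build the keyword arguments we would pass to SearchClient.search
    to enable the semantic layer over an existing index."""
    return {
        "search_text": query,
        "query_type": "semantic",                             # enable semantic ranking
        "semantic_configuration_name": "my-semantic-config",  # placeholder name
        "query_caption": "extractive",                        # short highlighted snippets
        "top": top,                                           # how many bits to pass to OpenAI
    }

# With the real SDK it would be used roughly like this (untested):
#
# from azure.core.credentials import AzureKeyCredential
# from azure.search.documents import SearchClient
#
# client = SearchClient("https://<service>.search.windows.net",
#                       "articles-index", AzureKeyCredential("<key>"))
# results = client.search(**build_semantic_search_kwargs("my question"))
# context = "\n".join(doc["content"] for doc in results)  # goes into the OpenAI prompt

print(build_semantic_search_kwargs("how do I renew my subscription?"))
```

If something like this is enough to get good chunks into the prompt, that would let us avoid duplicating the articles into Blob Storage.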
thank you!