Veda-VisionGPT is an AI-powered platform enabling seamless multilingual document processing and interaction, supporting 20+ Indian languages, including Sanskrit. It offers innovative features such as document translation, text extraction from non-machine-readable documents, and question-answering over documents.
Congratulations bro!
Even Google Translate and QuillBot can do that, so what's the difference? Don't say it processes 20+ Indian languages, any other tool can do that. So what else?
Well, the ChatGPT and DeepSeek free models can do it very well, because I tried and it works wonders. I can ask about a specific part to translate and also ask follow-up questions... Even if yours is better, most, I mean almost 90% of users will choose the most accessible one, the one they already know, especially if it's free. And the case you mentioned about 50+ page PDFs, tell me how many people in the whole world would need that? I may be sounding like a critic, but I guess that's okay, because you need good critics in order to build good products.
Try to translate a PDF of 50+ pages on QuillBot or Google Translate. You won't be able to do that. Also, you can't download the translated or extracted data there. On our platform you can.
Some of the old documents in Kannada can't be read. If you have a solution for that, it would be very useful. Translation of a Kannada document to English would also be useful; even Google Translate fails to translate them.
Yeah, we have that feature in our platform. The translation accuracy is also good.
So your vision is to provide everything in any language? Clarify your idea.
The name Veda-VisionGPT comes from the working technique of OCR: "Veda" stands for our regional languages and "Vision" stands for OCR.
Oh, that's really a great idea, go for it brother. Btw, what's the meaning of the name Veda GPT?
So basically, we provide extraction of text from all types of documents, including non-machine-readable documents, for Indian languages. You can also translate the document text into the native Indian language of your choice. We also have a smart interaction system where you can ask questions about your documents.
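To make the described flow concrete, here is a minimal sketch of what such an extraction-plus-translation pipeline could look like. It is an illustration only, not the actual Veda-VisionGPT code: it assumes Tesseract OCR with Indic language data (via the pytesseract and pdf2image packages) for scanned documents, and leaves translation as a hypothetical translate_text() placeholder, since the thread does not name the engines used.

```python
# Illustrative sketch only: assumes Tesseract OCR with Indic language packs
# installed, plus the pdf2image and pytesseract packages.
from pdf2image import convert_from_path
import pytesseract


def extract_text(pdf_path: str, lang: str = "kan+eng") -> str:
    """OCR every page of a scanned PDF (e.g. an old Kannada document)."""
    pages = convert_from_path(pdf_path, dpi=300)      # render each page as an image
    return "\n".join(
        pytesseract.image_to_string(page, lang=lang)  # Tesseract with Kannada + English data
        for page in pages
    )


def translate_text(text: str, target_lang: str = "en") -> str:
    """Hypothetical placeholder: plug in any Indic machine-translation backend here."""
    raise NotImplementedError("Swap in your preferred translation model or API.")


if __name__ == "__main__":
    extracted = extract_text("old_kannada_document.pdf")
    print(extracted[:500])                 # extracted text, downloadable as-is
    # english = translate_text(extracted)  # then translate on demand
```

A real pipeline would also need layout handling and cleanup of OCR noise before translation; the sketch skips those steps for brevity.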
Funding is the lifeblood of startups, and NSVP is poised to become a catalyst for turning visionary ideas into reality. The venture firm is not merely an investor; it's a partner in the journey of transforming concepts into successful enterprises. The funding provided by NSVP goes beyond monetary support; it is an endorsement of the belief that great ideas deserve to be nurtured. Sincerely, Nimesh Parekh, NSVP, Nsvp.co.in@gmail.com
Thank you for your suggestion, we'll keep in touch. Thanks!
All the best
Thank you!
How is your idea different from NotebookLM?
Hey, thank you for your comment. NotebookLM primarily serves academic use cases and works in only a single language, English; it also doesn't support all types of documents. Our platform, on the other hand, focuses on all types of documents, including those that are not machine-readable. Beyond this, Veda-VisionGPT gives you the option to translate documents into your own language, which helps people understand the content easily. So basically, our platform is focused on providing optimized document-processing solutions for multiple Indian languages, and it mostly targets government and legal areas where the need for such a tool is high.
Handling over 20 Indian languages, each with unique grammar and vocabulary, requires immense computational power and a massive dataset for each language. Additionally, the system aims to perform multiple tasks like translation, summarization, and question-answering, which increases complexity. Finally, ensuring accuracy and reliability across all languages demands a vast pool of linguistic experts, which can be difficult to assemble and maintain.
Respected sir, your point is right, although we have already completed our working prototype and are looking forward to improving its accuracy. In short, we have already handled the grammatical differences between languages as well as the task complexity. As for the last point, the required computational power is optimised through scalability techniques. We already have trained, tested, and reliable models for all the languages.
Hey, amazing product! I am interested to know how you handle semantic accuracy and context in low-resource languages like Sanskrit, especially for complex or ambiguous documents. Are you using specific techniques like transfer learning for this?
Thank you!
Hey, thank you! We are using a standard RAG system. Beyond that, we also have some preprocessing steps during text extraction as well as Q&A. We focus on context-accurate results, for which we process the text and generate contextual embeddings, which eases the retrieval phase of the model.
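Since the thread only says "standard RAG system" with contextual embeddings, here is a minimal sketch of what the retrieval step of such a system could look like. The encoder model, chunking strategy, and the final hand-off to a generative model are assumptions for illustration, not confirmed details of Veda-VisionGPT.

```python
# Minimal RAG retrieval sketch: chunk the extracted document text, embed the
# chunks with a multilingual sentence encoder, and fetch the chunks most
# similar to the user's question. Assumes the sentence-transformers package.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")


def chunk(text: str, size: int = 400) -> list[str]:
    """Naive fixed-size chunking; real systems usually split on sentences or sections."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k chunks whose embeddings are closest to the question."""
    chunk_vecs = encoder.encode(chunks, normalize_embeddings=True)
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = np.dot(chunk_vecs, q_vec)        # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]


# The retrieved chunks would then be passed, together with the user's question,
# to a generative model to produce the final answer (the "GPT" part of the name).
```

The multilingual encoder is only one possible choice; any embedding model with reasonable coverage of the supported Indian languages could stand in for it.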