Inspiration

We wanted to solve a familiar problem: reading something long for class, only to forget half of it because of its length, or simply because you weren't paying enough attention.

What it does

It takes in your reading, splits it into smaller "chunks", and quizzes you on each chunk, forcing you to recall what you just read, using Automatic Question Generation (AQG). The questions are based on Bloom's Taxonomy levels and adapt dynamically to the accuracy of the user's responses. It currently uses the Gemini API, which was originally chosen for its broad access to data: it should be able to generate comprehensive questions for a huge variety of texts, no matter their subject.
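The adaptive difficulty described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function name and one-level-per-answer stepping are assumptions.

```python
# Hedged sketch: question difficulty adapting to answer accuracy by
# stepping through Bloom's Taxonomy levels. Illustrative only.

BLOOM_LEVELS = ["Remember", "Understand", "Apply",
                "Analyze", "Evaluate", "Create"]

def next_level(current: int, was_correct: bool) -> int:
    """Move one Bloom level up after a correct answer, one level down
    after an incorrect one, clamped to the valid range."""
    step = 1 if was_correct else -1
    return max(0, min(len(BLOOM_LEVELS) - 1, current + step))
```

A correct answer at "Remember" (index 0) would move the next question to "Understand", while a wrong answer at the bottom level simply stays there.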

How we built it

We built a Flask backend in Python that parses the uploaded PDF and connects to the Gemini API. For each chunk of the PDF, the backend sends Gemini the chunk together with a predetermined prompt, and receives a structured response containing the question, answer options, correct answer, and explanation. Some basic HTML and JavaScript, written in part with the help of generative AI, presents the project on a web page hosted on Render.
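The prompt-and-parse step might look something like the sketch below. The actual Gemini call (made through the Flask backend) is left out so the structured-reply handling can stand alone; the prompt wording and JSON field names are assumptions, not the project's real ones.

```python
# Hedged sketch of the chunk -> prompt -> structured response pipeline.
# The network call to Gemini is omitted; only request building and
# reply parsing are shown. Field names are illustrative assumptions.
import json

PROMPT = (
    "Write one multiple-choice question about the passage below. "
    "Reply as JSON with keys: question, options, correct, explanation.\n\n"
)

def build_request(chunk: str) -> str:
    """Combine the predetermined prompt with one PDF chunk."""
    return PROMPT + chunk

def parse_reply(text: str) -> dict:
    """Pull the JSON object out of the model's reply, tolerating any
    extra prose or markdown fences surrounding it."""
    start, end = text.find("{"), text.rfind("}") + 1
    return json.loads(text[start:end])

# Shape of the reply the backend expects back from the model:
sample = ('{"question": "What is X?", "options": ["a", "b", "c", "d"], '
          '"correct": "a", "explanation": "..."}')
data = parse_reply(sample)
```

Parsing defensively like this matters in practice: LLM replies often wrap the JSON in explanatory text or code fences, which was part of the prompt fine-tuning effort described below.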

Challenges we ran into

Debugging: as we had no experience with HTML or JavaScript, it was at times very difficult to understand what exactly was going on. Working with the API was also a challenge; it took almost a day just to fine-tune the prompts and code so that we reliably received, and correctly handled, a properly structured response. We also had little knowledge of the NLP world and simply chose an easy LLM API to work with -- only later did we realize there were many better-suited tools for this task, which would also have helped steer us in the right direction.

Accomplishments that we're proud of

Developing a proper front-end design, and learning how to integrate and use an LLM API through Python!

What we learned

We learned a lot about front-end design, and about many different aspects of NLP research -- though the latter did not end up very well integrated into the project.

What's next for Reading quizzer

Integrating more dedicated NLP infrastructure, such as spaCy, into the project (as of right now, it depends on the Gemini API for simplicity). From a design perspective, we could add a pre- or post-reading quiz, a variable question count, and fine-tuned chunk sizing (potentially even giving the user the option to make the chunks longer or shorter).
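The user-adjustable chunk sizing mentioned above could be sketched as below. This is purely illustrative and uses a naive regex sentence splitter; the project currently chunks differently, and a library like spaCy could supply more robust sentence segmentation here.

```python
# Hedged sketch: split text into sentences, then group sentences into
# chunks of roughly `target_words` words, so a user-facing slider could
# set the chunk length. Naive splitting; spaCy could replace it.
import re

def chunk_text(text: str, target_words: int = 150) -> list[str]:
    """Group whole sentences into chunks of about target_words words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        current.append(sentence)
        count += len(sentence.split())
        if count >= target_words:
            chunks.append(" ".join(current))
            current, count = [], 0
    if current:  # flush any trailing sentences
        chunks.append(" ".join(current))
    return chunks
```

Keeping sentence boundaries intact matters for question quality: a chunk cut mid-sentence gives the question generator incomplete context.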

Below are some images comparing the functional website with the Figma prototype design. We decided to indicate the Bloom's Taxonomy level above each question.
