… seeding data for testing next
…uide in drizzle folder
burtonjong reviewed Feb 25, 2026
burtonjong (Contributor) approved these changes Mar 13, 2026

Added the ability to seed more rounds of test data, with the following judging criteria: Innovation, Technical Complexity, Design, and Presentation.
One problem I ran into was that Better Auth's generateId() function wasn't working in my seed file. I replaced it with crypto.randomUUID() as a placeholder, so it may need to be updated later.
The frontend show-scores component is essentially the same as the other two tables, except that it connects to a different API endpoint. That endpoint returns the data already sorted. I had to hard-type the return values because I couldn't get a proper type to work; it functions, but we may want to clean it up later.
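A rough sketch of the hard-typing stopgap mentioned above. The row shape and function names are assumptions for illustration; the point is that the response is cast to an explicit type rather than properly inferred or validated:

```typescript
// Assumed row shape for the scores endpoint (illustrative only).
type ScoreRow = {
  teamName: string;
  totalScore: number;
};

// Hard cast as a stopgap: the endpoint already returns rows sorted by
// total score, so the component just renders them in order. Replace the
// cast with real validation (e.g. a schema parser) later.
function parseScores(json: unknown): ScoreRow[] {
  return json as ScoreRow[];
}

const rows = parseScores([
  { teamName: "Team A", totalScore: 30 },
  { teamName: "Team B", totalScore: 25 },
]);
```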
For now, the score is calculated by summing a team's results across all judging rounds. I'm assuming this is how we would do it on the day of, but it can be changed if there are other considerations (e.g., a team not completing all rounds, which would call for normalization).
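The summing approach above can be sketched as follows — the type and function names are hypothetical, but the aggregation matches what's described: a plain sum over per-round scores, with no normalization:

```typescript
// Hypothetical per-round score shape.
type RoundScore = { round: number; score: number };

// A team's total is the plain sum of its round scores; a team that
// missed rounds simply contributes fewer terms (no normalization yet).
function totalScore(rounds: RoundScore[]): number {
  return rounds.reduce((sum, r) => sum + r.score, 0);
}

// e.g. three rounds scored 8, 7, and 9 give a total of 24
const total = totalScore([
  { round: 1, score: 8 },
  { round: 2, score: 7 },
  { round: 3, score: 9 },
]);
```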
Anyway, hopefully this is good :)