Your candidates, your coding challenges, our evaluation. Automated code reviews of your candidates. Designed for what matters in the real world - writing clean code. Currently in private beta. Codu is an AI code evaluation assistant, the first in the world to evaluate for clean code - and it works for any coding challenge your candidates solve. Codu looks for the things that matter when writing production-quality code - qualities that had never been automated. Until now.

We are launching an invite-only private beta. Apply here if you want access, and we'll get back to you.

Codu features

1. Send your test and check for correctness

You can send your test via Codu and get it evaluated automatically.

Does your tech team spend time getting a candidate's code running and checking whether all the test cases are covered? Not any more. Codu can check the output of any coding test you use. Just give us a config file with the expected inputs and outputs, and Codu will run the tests for you automatically.
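The idea behind this kind of check can be sketched as follows. Note that this is an illustrative sketch only: the config shape (a list of input/expected-output pairs) and the function name are hypothetical, not Codu's actual format.

```python
import subprocess

def check_submission(command, cases):
    """Run a candidate's program once per test case, feeding each
    case's input on stdin and comparing stdout (whitespace-trimmed)
    against the expected output. Returns one bool per case."""
    results = []
    for case in cases:
        proc = subprocess.run(
            command,                # how to run the candidate's code
            input=case["input"],    # test input on stdin
            capture_output=True,
            text=True,
            timeout=10,             # guard against hanging submissions
        )
        results.append(proc.stdout.strip() == case["expected"].strip())
    return results

# Hypothetical config: each case pairs an input with its expected output.
cases = [
    {"input": "2 3\n", "expected": "5"},
    {"input": "10 -4\n", "expected": "6"},
]
# Usage, e.g.: check_submission(["python", "solution.py"], cases)
```

A real evaluator would also need sandboxing, resource limits, and per-language build steps, but the core loop - run, capture, compare - is this simple.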

2. Codu evaluates for clean code and recommends candidates

Our machine learning model predicts code quality (read how) based on Readability and suggests whether you should proceed with the candidate. Codu can evaluate any coding challenge you decide to give, as long as you're checking for the ability to write good code. Codu looks at Readability, Modelling (currently Java only) and Correctness to arrive at an overall recommendation.

You can also see the classes and methods in the code, so you can understand the modelling without opening the submission itself.

3. Candidates upload code and you get notified

We generate a unique link for your company that can be shared with candidates so they can upload their code or GitHub links.

4. Manage candidate code submissions