Your candidates, your coding challenges, our evaluation. Automated code reviews of your candidates. Designed for what matters in the real world - writing clean code. Currently in private beta.
Codu.ai is an AI code evaluation assistant, the first in the world to evaluate for clean code - on any coding challenge your candidates solve. Codu looks for the things that matter when writing production-quality code - qualities that had never been automated. Until now.
We are launching an invite-only private beta. Apply here if you want access, and we'll get back to you.
You can send your test via Codu and get it evaluated automatically.
Does your tech team spend time getting a candidate's code running and checking whether it covers all the test cases? Well, no more. Codu can check output for any coding test you have. Just give us a config file with the expected inputs and outputs, and Codu will test it for you automatically.
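The exact config schema isn't shown here; as an illustrative sketch, such a file might pair each test case's input with its expected output along these lines (all field names are hypothetical):

```json
{
  "challenge": "two-sum",
  "testCases": [
    { "input": "2 7 11 15\n9", "expectedOutput": "0 1" },
    { "input": "3 2 4\n6",     "expectedOutput": "1 2" }
  ]
}
```

In a setup like this, each candidate submission would be run once per test case, with its stdout compared against the expected output.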
Our machine learning model predicts code quality (read how) based on Readability and suggests whether you should proceed with the candidate. Codu can evaluate any coding challenge you decide to give, as long as you're checking for the ability to write good code. Codu looks at Readability, Modelling (currently Java only) and Correctness to arrive at an overall recommendation.
You can also see the classes and methods in the code, so you can understand its modelling without opening the submission.
We generate a unique link for your company that can be shared with candidates to upload their code or GitHub links.