canine's blog

By canine, 4 weeks ago, In English

there have been quite a few attempts to create language models able to solve competitive programming problems. i'm nowhere near this ambitious, but i think providing quality problem analyses — more detailed tags than Codeforces currently offers, showing which approach each solver took on a problem, and problem embeddings to enhance problem search (e.g. capable of finding a problem similar to the intersection of two problems) — could have a more direct positive impact on competitive programmers than behemoths like AlphaCode. don't you wish you knew what proportion of submissions to a problem used push-relabel vs ford-fulkerson, or whether a problem tagged data structures used lazy segment trees or simply a fenwick tree?
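the "intersection of two problems" search could be sketched roughly like this: average the embeddings of the two query problems and take the nearest neighbour by cosine similarity. everything below (the toy problem names, the 3-dimensional vectors, `most_similar`) is hypothetical and just illustrates the idea, not my actual model:

```python
import numpy as np

# Hypothetical embeddings: one row per problem, then L2-normalized.
problems = ["max-flow", "lazy segtree", "flow + segtree", "dp on trees"]
emb = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.0],
    [0.6, 0.6, 0.1],
    [0.0, 0.1, 0.9],
])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def most_similar(query, emb, exclude=()):
    """Index of the problem whose embedding has the highest
    cosine similarity to the query vector."""
    sims = emb @ (query / np.linalg.norm(query))
    for i in exclude:
        sims[i] = -np.inf          # don't return the query problems themselves
    return int(np.argmax(sims))

# "Intersection" of two problems: average their embeddings and
# search among the remaining problems.
query = (emb[0] + emb[1]) / 2
print(problems[most_similar(query, emb, exclude=(0, 1))])  # -> flow + segtree
```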

if you think this is a good idea -- i might be going insane with tunnel vision because i've spent a lot of time on this concept -- you can help me by annotating your own solutions on my site. if you find anything missing there (tags or features), or have other concerns, please let me know. i've trained an AST-based ~30M-parameter BERT-like transformer on masked language modeling and just need some data to fine-tune it to predict specific tags. if just 50 people each annotate 20 submissions, that would definitely be enough! this will also improve the embedding, helping it encode the salient aspects of different approaches to a problem. i've tried using ChatGPT for this task, and the results were pretty bad. i think better, or at least human, data could fix this.
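for the curious: predicting tags is multi-label classification, since one submission can be both "flows" and "lazy-segtree" at once. a minimal sketch of such a head, assuming a pooled transformer output and an illustrative tag set (the names, dimensions, and threshold here are all placeholders, not my actual setup) -- one independent sigmoid per tag rather than a softmax:

```python
import numpy as np

TAGS = ["flows", "lazy-segtree", "fenwick", "dp"]  # illustrative tag set

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_tags(pooled, W, b, threshold=0.5):
    """Multi-label head: an independent sigmoid per tag, so a
    submission can carry several tags simultaneously."""
    probs = sigmoid(pooled @ W + b)
    return [tag for tag, p in zip(TAGS, probs) if p >= threshold]

rng = np.random.default_rng(0)
pooled = rng.standard_normal(8)          # stand-in for the transformer's pooled output
W = rng.standard_normal((8, len(TAGS))) # would be learned during fine-tuning
b = np.zeros(len(TAGS))
print(predict_tags(pooled, W, b))
```

during fine-tuning each sigmoid would get its own binary cross-entropy loss against the human-provided tag annotations.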

tldr: please consider donating data to my little project here

thank you! :>)
