Tooling

Recommended tooling, and tools you should (or must) avoid.

On this page, we've collected some tools that you may find useful when working on the site. Additionally, we've listed some tools that you should avoid.

Please remember not to use the restricted tools without proper consideration, and ensure you don't use any of the tools we prohibit.

We highly recommend you consider using the following tools:

  • LanguageTool, a comprehensive spelling, style and grammar checker. It is important you format your written content properly, and that you write in proper English, with correct spelling and grammar. LanguageTool is one of the best tools available for keeping your prose up to our standards.

    • For details on the many plugins and other tools that integrate with LanguageTool, please see this page. If you'd rather run checks from a script, see the brief sketch after this list.
  • Otter, a semi-automated transcription tool. If you decide to include audio files with your submission, it is important to include a transcription.

    Many tools exist for this purpose, but we've found that Otter is generally effective, easy to use, and well suited to collaborating with others.

    If you decide to use an external transcription tool, please understand that we won't accept links to transcriptions hosted on external sites. Instead, you must include the transcription in a dedicated article tagged with the transcription tag, and link to it alongside your file.

  • Defensive and offensive AI model poisoning concepts and tools. These can help protect your work from unauthorised or exploitative use, and we highly recommend looking into them if you have any concerns about generative AI tooling.

    • AntiFake, an offensive tool to disrupt AI models scraping voice samples without creators' consent, causing them to generate audio that doesn't sound like the source material.
    • Glaze, a defensive tool to prevent art style theft.
    • Microsoft's code generation poisoning experiment, a proof-of-concept demonstration of how code generation models can be poisoned.
    • Nightshade, an offensive tool to disrupt AI models scraping artwork without consent.
    • Trojan Puzzle, an offensive approach that disrupts AI code generation models and causes them to generate insecure or malicious code.
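
If you'd like to run LanguageTool checks from a script rather than through an editor plugin, the sketch below shows one possible approach using LanguageTool's public HTTP API. This is only an illustration, not part of our required workflow: it assumes Python 3 with the requests package installed, and it uses the public https://api.languagetool.org endpoint, which is rate-limited.

```python
# Minimal sketch: send a piece of text to the public LanguageTool API
# and print any spelling or grammar issues it reports.
# Assumes Python 3 with the `requests` package installed.
import requests

def check_text(text: str, language: str = "en-GB") -> list:
    """Return the list of matches LanguageTool found in `text`."""
    response = requests.post(
        "https://api.languagetool.org/v2/check",
        data={"text": text, "language": language},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("matches", [])

if __name__ == "__main__":
    for match in check_text("This sentense has a speling mistake."):
        print(f"{match['message']} (at offset {match['offset']})")
```

If you check text frequently, you may prefer to run a local LanguageTool server instead of the public endpoint; it exposes the same /v2/check API without the public instance's rate limits.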

Restricted Tools

We restrict the following tools, and you should avoid using them:

  • Generative AI tooling, classified as follows. Please note that we have no interest in removing this policy, or discussing the merits and demerits of generative AI.

Prohibited Tools

You may not use the following tools in your submissions. If you submit work you created using these tools, we will reject your submission.

Repeated violations may result in you being banned from our GitHub organisation.

  • Generative AI tooling, classified as follows. Please note that we have no interest in removing this policy, or discussing the merits and demerits of generative AI.
