Patch - 2.6.3

@LyubomirT LyubomirT released this 28 Mar 17:43

IntenseRP Next v2.6.3

GLM-5-Turbo, Reimagined Settings, and Parallelism!

Release Notes 🛠️

A medium-to-large release focused mostly on the desktop experience, plus some powerful new features I've been working on for quite a while (finally ready and shipped together in one release). It also includes a redesigned settings window that is easier to navigate and use than before.

⚙️ (NEW!!) Settings

  • Redesigned the structure and navigation of the settings window entirely.
  • Renamed many settings to be more human-friendly.
  • Grouped all provider behaviors under one page with tabs.
  • Made Search smarter and more universal.
  • Improved the UI/UX and made the window full-screen by default.
  • Restructured setting locations to be more intuitive and logical.

⛓️ (NEW!!) Providers in Parallel

  • ADD: Experimental. This feature lets you open multiple browsers at once and route requests to multiple providers simultaneously. Extremely useful if you don't want to keep restarting the service.

Warning

Can be heavy on RAM if you open many providers, as it opens up a browser window per provider.

🎒 (NEW!!) Loadouts

  • ADD: Requested by @adhdandy! This is a power-user feature with a configurable .json file that lets you create "profiles" for your models' formatting and behavior settings. A loadout overrides normal settings, so you can quickly switch between configurations without touching the settings window. It's a little rough for now due to the file-based interface, but a GUI is likely coming soon, too.
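A loadout file might look something like the sketch below. The exact schema isn't documented in these notes, so every key here (profile names, `temperature`, `streaming`, `active`) is a hypothetical illustration of the idea, not the real format:

```json
{
  "loadouts": {
    "precise": {
      "temperature": 0.3,
      "streaming": true
    },
    "creative": {
      "temperature": 1.1,
      "streaming": false
    }
  },
  "active": "precise"
}
```

Switching configurations would then be a matter of editing one field (or swapping files) instead of clicking through the settings window.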

🐚 (NEW!!) Text Completions support

  • ADD: IntenseRP now accepts requests to /v1/completions and treats them as a raw prompt. It applies no formatting on top and simply passes the prompt to the model. In a way, it's the "power user" option for when you want full control over exactly what is sent.
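Since /v1/completions follows the familiar OpenAI-style text-completions shape, building a request against it can be sketched like this. The base URL and port are assumptions for illustration; point them at wherever your IntenseRP instance actually listens:

```python
import json

# Hypothetical base URL; adjust to your own IntenseRP instance.
BASE_URL = "http://127.0.0.1:5000"

def build_completion_request(prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-style text-completion request.

    Unlike chat completions, there are no role/message fields here:
    the prompt string is forwarded to the model verbatim, with no
    formatting applied on top.
    """
    url = f"{BASE_URL}/v1/completions"
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return url, body
```

The key difference from /v1/chat/completions is the payload: a single raw `prompt` string instead of a structured `messages` list, which is what gives you full control over what the model sees.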

🔮 GLM Driver

  • ADD: Added support for GLM-5-Turbo.
  • FIX: GLM-4.7 works again.
  • FIX: Fixed model selection in the GLM driver to be state-based and more responsive with fewer errors.

✨ AI Studio Driver

  • FIX: No longer tries to re-enter identical values for temperature, top-p, etc. when your request values already match the ones set in the UI.

🧪 Experimental

  • REMOVE: Removed Better Model Names. It was a failed experiment that, according to user feedback, hurt the experience more than it improved it.

Note

If IntenseRP Next has been useful to you, consider supporting development or starring it - it's a solo project that I work on in my spare time, and every bit helps to keep it running. 💙

Important

I'm looking for feedback about how IRP is doing right now, and what I should add in later updates. If you'd like to help, submit the survey form with your own thoughts about what to do next!

If you have any questions or want to chat a little, join our Discord server!


Credits

  • @LyubomirT (development and implementation)
  • All the wonderful people who reported bugs and suggested improvements. 💖