Replies: 5 comments
-
Hi @CTalvio, about Jellify: I really like their app and I'm in close contact with Violet. We would like to create a Jellyfin plugin with AudioMuse-AI; the main challenge is integrating Essentia (which is C++ with a Python wrapper) into a Jellyfin plugin, which instead needs to be written in C#. I'm also talking with some devs of Finamp, another Jellyfin mobile app that I really like. I think they would like AudioMuse-AI to be integrated for the instant mix feature (for which I'm improving the similarity logic). The point is still to have a plugin for easy integration with the mobile apps. While I wait for someone more capable than me with this programming language (C#) to help with this, I hope you like the current minimalist front-end.
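To make the integration challenge concrete, here is a minimal sketch of what the Python-side analysis looks like today, which a C# plugin would have to wrap or replicate. It assumes the essentia-tensorflow package and the msd-musicnn-1.pb model file from Essentia's published model collection; the output node name follows Essentia's model documentation and may differ for other models.

```python
# Minimal sketch: extracting a MusiCNN embedding with Essentia's
# Python wrapper (assumes the essentia-tensorflow package and the
# msd-musicnn-1.pb model from Essentia's published model collection).
from essentia.standard import MonoLoader, TensorflowPredictMusiCNN

# MusiCNN models expect 16 kHz mono audio
audio = MonoLoader(filename="track.mp3", sampleRate=16000)()

# "model/dense/BiasAdd" is the embedding layer named in Essentia's
# model docs; the default output would be tag activations instead
model = TensorflowPredictMusiCNN(
    graphFilename="msd-musicnn-1.pb",
    output="model/dense/BiasAdd",
)
embedding_frames = model(audio)            # one embedding per patch
embedding = embedding_frames.mean(axis=0)  # track-level embedding
```

Everything in that pipeline lives on the C++/Python side, which is why a pure C# plugin can't simply link against it.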
-
Yes. I've experimented with MusicNN and Essentia myself. I mentioned beets-xtractor, a plugin for beets, the Python-based music file tagging and organizing CLI tool. The plugin aims to analyze your music library using Essentia, and then to embed all the extracted metadata into the tags of the actual files, allowing beets or any other application that can read them to access those details. Getting that working, and taking it all the way to something that improves my experience of actually listening to my library, has been rocky. AudioMuse-AI netted me much more immediate results. But what I really want to do is add genre tags to ALL my music.

Essentially, what I'm looking for is a well-performing endless auto-play. I'm trying to get there by improving the quality of my metadata for use with my current client of choice, Symfonium. I also use the Instant Mix feature of Jellyfin when playing music from my desktop/laptop.

Symfonium manages excellent endless playback of my music (though not as good as JF's Instant Mix). Depending on the media server used, Symfonium can auto-queue in several ways, but with JF the best mode is genre-based. With this it is able to maintain and develop the tone of whatever track I start with. This is pretty much how I always want to listen to my music. I have a smart playlist in Symfonium that uses a couple of basic rules to display a shortlist of familiar tracks on the front page of the app, allowing me to "set the tone" for a listening session based on one instantly familiar track. I vastly prefer this over playlists created in advance, as I find it difficult to parse what exactly I'll be getting based on a playlist title or its contents. Especially as I want the queue to contain tracks I'm not familiar with, as long as they adhere to some minimum similarity to the track I started with.

Symfonium works, but this way (and with JF's Instant Mix, too) it ends up excluding any tracks that do not have sufficient genre metadata, so they will never come up in the automatic queue. Hence I miss out on some favorites, and some new discoveries. Trying to use a track with little metadata as the starting point also has poor results. So I've been endlessly trying to fill in the gaps in my metadata. Beets is extremely good at this, but for a lot of the files I have, the data simply isn't out there, or can't be reliably fetched and applied.

This is where I started looking into something AI-based: something which could analyze the music and calculate similarities between tracks. The most basic form of using this to improve the experience would be to simply add the same human-readable genre tags to any tracks missing them in Jellyfin, allowing apps like Symfonium to include them when auto-queuing tracks. If AudioMuse-AI could do this, that'd be perfect for the way I currently listen to my library.

But it can probably do more. I know Essentia is able to determine tempo, mood, genre, and more. If that metadata could be stored and exposed in Jellyfin, any client could make use of it as it sees fit.

The playlists AudioMuse-AI already makes are good. It achieves the goal of grouping together tracks I would play one after another. But there is currently no way for me to make such playlists on the fly, the way I'd want to: the input being a track picked in the moment (rather than a prompt), and the output being an endless queue starting with that track, rather than a collection of solely similar tracks.

If that functionality existed, I would additionally want to be able to adjust how closely it sticks to the style of the input track, as well as whether it's allowed to drift to other styles of music as the queue progresses.

TL;DR: AudioMuse-AI should make the metadata it extracts available to clients. Even just adding genre tags to tracks without them would benefit any users who browse by genre, or are using some form of genre-based autoplay. Improving Instant Mix has similar benefits. Some of my music has higher quality metadata, and some might even have none. As such, even as I happily listen to my library, there is a portion of it being "left out". This is the problem I think AudioMuse-AI could solve.
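A queue with exactly that knob is easy to sketch once per-track embeddings exist. The following is a hypothetical illustration, not AudioMuse-AI's actual algorithm: it greedily walks a library of embedding vectors, and a single drift parameter controls whether similarity is measured against the original seed track (stick to its style) or against the most recently played track (allowed to wander).

```python
# Hypothetical sketch of a drift-controlled endless queue over
# per-track embeddings; not AudioMuse-AI's actual implementation.
import numpy as np

def endless_queue(library: dict[str, np.ndarray], seed_id: str,
                  length: int = 50, drift: float = 0.0):
    """Yield track ids one at a time.

    library -- mapping of track id -> unit-normalized embedding
    drift   -- 0.0: always compare to the seed track (stick to style)
               1.0: always compare to the last track (free to wander)
    """
    seed = library[seed_id]
    last = seed
    played = {seed_id}
    yield seed_id
    for _ in range(length - 1):
        # Reference point blends the seed and the last-played track
        ref = (1.0 - drift) * seed + drift * last
        ref /= np.linalg.norm(ref)
        # Pick the most cosine-similar track not yet played
        best_id, best_sim = None, -2.0
        for tid, emb in library.items():
            if tid in played:
                continue
            sim = float(ref @ emb)
            if sim > best_sim:
                best_id, best_sim = tid, sim
        if best_id is None:
            return  # library exhausted
        played.add(best_id)
        last = library[best_id]
        yield best_id
```

With drift=0.0 the queue orbits the seed track indefinitely; with drift=1.0 each pick only has to resemble the previous one, so the style can evolve over a long session.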
-
The "similar song" functionality in AudioMuse-AI is what you describe: start with one song to set the mood and then have continuous playback. It's the functionality I prefer most, because I can start by saying "I would like to listen to Red Hot Chili Peppers - By the Way" and then keep the rhythm with similar tracks, maybe also discovering new ones. For this functionality I'm also asking the developers of some Jellyfin music players (Jellify and Finamp) to include it directly in their players, so everything is in the player. I hope this is what you do in Symfonium, PLUS the power to automatically scan all the songs, and to do it not only based on GENRE but on embeddings, which can capture even more sophisticated patterns, especially with the huge number of genres that exist nowadays.

About adding this metadata directly to the songs: at the moment this is not in scope for AudioMuse-AI, because it doesn't manage the song files directly. It "reads" a song, analyzes it, and saves the information to a database. It never interacts directly with the song files; it always calls a Jellyfin API (the last release also introduced support for Navidrome). I really hope that with the future work on AudioMuse-AI you will like it even more and continue to use it. If there is any other feature that you think is interesting, please ask and we will see if it is possible to develop it.
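For readers unfamiliar with that architecture, here is a rough sketch of the read-analyze-store loop described above. The endpoint path, auth header, and table layout are assumptions for illustration, not AudioMuse-AI's actual code; the key point is that audio is fetched over the server's API and only derived data is written, never the media files.

```python
# Illustrative sketch of the read -> analyze -> store loop; the
# endpoint, auth header, and schema are assumptions, not the
# project's actual code.
import sqlite3
import requests
import numpy as np

JELLYFIN = "http://localhost:8096"     # assumed server address
HEADERS = {"X-Emby-Token": "API_KEY"}  # assumed API-key auth

def analyze(audio_bytes: bytes) -> np.ndarray:
    """Placeholder for the real Essentia analysis step."""
    raise NotImplementedError

def process_track(item_id: str, db: sqlite3.Connection) -> None:
    # 1. "Read": fetch the audio through the Jellyfin API,
    #    never touching the file on disk directly
    resp = requests.get(f"{JELLYFIN}/Audio/{item_id}/stream",
                        headers=HEADERS)
    resp.raise_for_status()
    # 2. Analyze: extract an embedding (Essentia in the real app)
    embedding = analyze(resp.content)
    # 3. Store: only derived data goes into the local database
    db.execute(
        "INSERT OR REPLACE INTO embeddings (item_id, vector) VALUES (?, ?)",
        (item_id, embedding.astype(np.float32).tobytes()),
    )
    db.commit()
```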
-
I'm not talking about modifying the metadata embedded in the media files. Rather, adding the metadata stored in the AudioMuse-AI database to the database that Jellyfin has. Jellyfin is able to store a bunch of extra data that doesn't come from the media files, and it already does so when it fetches metadata from the internet through a metadata provider or plugin. It is possible to add or remove metadata on media items, and that metadata is stored by Jellyfin without modifying the media files.

AudioMuse-AI could add the metadata it extracts to Jellyfin (at least for the metadata types Jellyfin supports), and that metadata can then be provided to ANY client using the Jellyfin API, with no extra client implementations necessary (at least for genre tags). And there is no need to modify the media files! Here is the relevant API endpoint that would allow AudioMuse-AI to add genre tags to an item it has analyzed.

The only limitation is that Jellyfin is missing a lot of the metadata fields that AudioMuse-AI is able to provide. But implementing them, rather than working around Jellyfin, would be preferable. It would allow clients to stick to the Jellyfin API, rather than needing to support an additional parallel application. It would also allow Jellyfin to directly utilize that metadata for the Instant Mix feature, as well as for "More Like This".

Ultimately, if AudioMuse-AI becomes a plugin, it should work like any other metadata provider plugin, except that instead of providing metadata by getting it from the internet, it does so by analyzing the files.
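As a rough illustration of that flow: to my understanding, Jellyfin's item-update endpoint expects the full item DTO to be sent back, and exact paths can vary between server versions, so treat this as a hedged sketch rather than a reference.

```python
# Sketch of adding genre tags to a Jellyfin item via its HTTP API.
# The paths and the need to round-trip the full item DTO reflect my
# understanding of the API and may differ between server versions.
import requests

JELLYFIN = "http://localhost:8096"     # assumed server address
HEADERS = {"X-Emby-Token": "API_KEY"}  # assumed admin API key

def add_genres(item_id: str, new_genres: list[str]) -> None:
    # Fetch the current item so the whole DTO can be sent back
    item = requests.get(f"{JELLYFIN}/Items/{item_id}",
                        headers=HEADERS).json()
    # Merge rather than overwrite, keeping genres already present
    item["Genres"] = sorted(set(item.get("Genres", [])) | set(new_genres))
    # POST the updated DTO; Jellyfin persists it in its own database
    # without ever touching the media file on disk
    resp = requests.post(f"{JELLYFIN}/Items/{item_id}",
                         headers=HEADERS, json=item)
    resp.raise_for_status()

# Hypothetical usage, with genres coming from an Essentia classifier
add_genres("SOME-ITEM-ID", ["Ambient", "Electronic"])
```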
-
OK, thanks for the clarification. I'll definitely take a look at the API for adding genres to a song. Really, thanks for your suggestions! For the future I would definitely like more integration directly in Jellyfin, or with a Jellyfin plugin. I hope that the current Minimum Viable Product, with its minimal frontend, demonstrates the potential of AudioMuse-AI and attracts some C# developers to help me with that. My vision is that song spectrogram analysis for automatic playlist creation and instant mix MUST be open, free and available for everyone.
-
Hello!
I've been on another stint to improve my experience of using Jellyfin for my music collection. My big never-ending mission has been to tag ALL my music with genre tags to create relationships between different parts of my collection, and to use those tags to generate playlists that can include any track in my library, while excluding the types of music I'm not in the mood for.
AudioMuse-AI has been the easiest way to get essentia to process my music, as compared to tools like beets-xtractor.
I read that your future plans involve figuring out how to turn this into a plugin. While exploring recently, I also found Jellify, a new mobile client for Jellyfin. Are you aware that the Jellify team is working on a plugin to achieve some of the things AudioMuse-AI can already do? They seem to have a plugin underway, and are in turn in the process of figuring out Essentia.
Their plugin repo is here: https://github.com/Jellify-Music/Plugin