Music Perception Engine

Our technology is the result of more than ten years of research in music technology, artificial intelligence, signal analysis, and advanced mathematics. Thanks to this research, our algorithm constructs its own representation of music.

Our research focuses on two main areas: similarity estimation (music2vec) and track classification. Until recently, none of the available technologies were powerful enough for the demands of the music industry. As researchers at IRCAM, we set out to beat the state of the art in music recommendation. Today we have reached that goal, and we can proudly call ourselves world leaders in the field.

The first version of our technology won an award at the 2011 MIREX (Music Information Retrieval Evaluation eXchange) competition and has never been beaten since. In four years, we have increased its performance by more than 70%.

Our powerful music analysis technology is made easily accessible through an API, empowering developers from all over the world to build a new generation of intelligent music applications.
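To give a flavour of what this looks like for a developer, the sketch below requests tracks similar to a reference track over HTTP. The endpoint, parameters, and response fields are hypothetical placeholders for illustration, not our documented API.

    import requests

    # Hypothetical endpoint, parameters, and response fields,
    # shown only to illustrate the shape of such an API call.
    API_KEY = "your-api-key"
    response = requests.get(
        "https://api.example.com/v1/tracks/similar",  # placeholder URL
        params={"track_id": "123456", "limit": 10, "api_key": API_KEY},
    )
    for track in response.json().get("results", []):
        print(track["title"], track["score"])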

Teaching machines to feel the music

we are building a perception engine

We are a team of researchers: sound is our profession and our passion. We have built an artificial intelligence that actually listens to music and forms its own representation of it. Since humans and machines hear very differently, the human way of understanding music could not simply be copied; we first had to find a way to transform audio signals into useful data, and then teach the machine to learn from that data.
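As a minimal sketch of what transforming audio into useful data can look like, the example below uses the open-source librosa library to turn a waveform into a fixed-size feature vector; this is an illustration, not our production pipeline.

    import librosa
    import numpy as np

    # Load a track and resample it to a fixed rate (the path is a placeholder).
    signal, sr = librosa.load("track.mp3", sr=22050)

    # Convert the raw waveform into a log-scaled mel spectrogram:
    # a time-frequency representation closer to how humans perceive sound.
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)

    # Summarize the track as a fixed-size vector a model can learn from.
    features = np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])
    print(features.shape)  # (256,)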

Acoustic Similarity

we make sense of music beyond language

Determining the similarity between two musical tracks is far from easy, since it stretches well beyond language and concrete concepts such as genre, tempo, or instrumentation. Subtler qualities such as emotion, ambience, and style must also be understood and accounted for. To do this, our technology extracts a signature of several thousand dimensions from each track, placing it as a point in a high-dimensional music space. The distance between two tracks then measures their similarity, just like the distance between two cities on a map.
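The map analogy translates directly into code. In the sketch below, the signatures are random stand-ins for real track embeddings; cosine similarity ranks a catalog against a query track, and the nearest neighbours are the most similar tracks.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in signatures: in reality each vector is extracted from the audio.
    catalog = rng.normal(size=(1000, 2048))  # 1,000 tracks in music space
    query = rng.normal(size=2048)            # the reference track

    # Cosine similarity: tracks pointing in the same direction sound alike.
    norms = np.linalg.norm(catalog, axis=1) * np.linalg.norm(query)
    scores = catalog @ query / norms

    # The closest points in music space are the most similar tracks.
    top5 = np.argsort(scores)[::-1][:5]
    print(top5, scores[top5])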

Automatic Classification

we describe music in words

Tagging is the art of automatically classifying and annotating the tracks of a catalog or platform. Since our technology's understanding goes well beyond what words can convey, we use several descriptive categories to paint as complete a picture as possible: genre, instrument, voice, ambience, and usage.
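Conceptually, auto-tagging is a multi-label classification problem: the system scores every tag in every category and keeps those above a confidence threshold. A toy sketch (the tag names and scores are illustrative, not output from our models):

    # Illustrative per-category scores, e.g. sigmoid outputs of a classifier.
    scores = {
        "genre":      {"jazz": 0.91, "rock": 0.07, "electronic": 0.12},
        "instrument": {"piano": 0.88, "saxophone": 0.74, "guitar": 0.21},
        "voice":      {"instrumental": 0.95, "female": 0.03},
        "ambience":   {"calm": 0.81, "energetic": 0.15},
        "usage":      {"lounge": 0.66, "workout": 0.04},
    }

    THRESHOLD = 0.5  # keep only confident tags

    tags = {
        category: [tag for tag, p in candidates.items() if p >= THRESHOLD]
        for category, candidates in scores.items()
    }
    print(tags)  # {'genre': ['jazz'], 'instrument': ['piano', 'saxophone'], ...}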

Technology in action

stay informed with our newsletter

Product releases, tech meet-ups... be the first to know.

Think we might help? We’d like to hear from you: contact@niland.io