“Biro” by Honeyblood (2015)

 

 

Everyone is an author with something to say.
Everyone’s got a picture to paint.
My story will turn gold, or so I’ve been told.
It’s got ‘best-seller’ written all over it.

All the pain you’ve been through will be the making of you.
All the pain you’ve been through, will be the making.

Nobody knows the trouble you’ve seen,
Nobody really comes close.
But you’re over it now, you can boast.
Each page printed is honest truth.
Troubles when they plagiarise you.
Shit to brand new, you’ll get around to it soon.

All the pain you’ve been through will be the making of you.
Tear the heart in two, it’ll be the making of you.

If I threw my pen into the sea,
I know there’ll be someone to write after me.

Will be the making of you.
All the pain you’ve been through will be the making of you.


“Skyline” by Karma Fields — music video by Raven Kwok (2015)

 

“Skyline is a code-based generative music video directed and programmed by Raven Kwok for the track Skyline (itunes.apple.com/us/album/skyline-single/id1039135793) by Karma Fields (soundcloud.com/karmafields). The entire music video consists of multiple stages that are programmed and generated using Processing.

“One of the core principles for generating the visual patterns in Skyline is Voronoi tessellation. This geometric model dates back to 1644 in René Descartes’s vortex theory of planetary motion, and has been widely used by computational artists, for example, Robert Hodgin (vimeo.com/207637), Frederik Vanhoutte (vimeo.com/86820638), Diana Lange (flickr.com/photos/dianalange/sets/72157629453008849/), Jon McCormack (jonmccormack.info/~jonmc/sa/artworks/voronoi-wall/), etc.

“In Skyline’s systems, seeds for generating the diagram are sorted into various types of agents following certain behaviors and appearance transformations. They are driven either by the song’s audio spectrum with different customized layouts, or by an animated sequence of the vocalist, collectively forming a complex and organic outcome.”
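The core idea of Voronoi tessellation is simple: given a set of seed points, each point in the plane belongs to the cell of its nearest seed. Skyline itself is programmed in Processing, so the sketch below is only an illustrative stand-in — a minimal pure-Python version that assigns each grid pixel to its nearest seed and renders the cells as ASCII characters.

```python
import math
import random

def voronoi_cells(width, height, seeds):
    """Return a grid where each entry is the index of the nearest seed point."""
    grid = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Nearest-seed assignment: this is what defines a Voronoi cell.
            grid[y][x] = min(
                range(len(seeds)),
                key=lambda i: math.hypot(x - seeds[i][0], y - seeds[i][1]),
            )
    return grid

# Scatter a few random seeds and print the resulting tessellation.
random.seed(1)
seeds = [(random.uniform(0, 40), random.uniform(0, 20)) for _ in range(6)]
for row in voronoi_cells(40, 20, seeds):
    print("".join(" .:-=+*"[i] for i in row))
```

In Skyline the seeds are not static: they move as agents driven by the audio spectrum, so the cell boundaries animate with the track. Here the seeds are fixed, which is enough to show the geometry.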

 

How Does Your Phone Know This Is A Dog?

Last year, we (a couple of people who knew nothing about how voice search works) set out to make a video about the research that’s gone into teaching computers to recognize speech and understand language.

Making the video was eye-opening and brain-opening. It introduced us to concepts we’d never heard of – like machine learning and artificial neural networks – and ever since, we’ve been kind of fascinated by them. Machine learning, in particular, is a very active area of computer science research, with far-ranging applications beyond voice search – like machine translation, image recognition and description, and Google Voice transcription.

So… still curious to know more (and having just started this project), we found Google researchers Greg Corrado and Christopher Olah and ambushed them with our machine learning questions.

More Here
