Inside Computer Music is an investigation of how new technological developments have influenced the creative possibilities of composers of computer music in the last 50 years. This book combines detailed research into the development of computer music techniques with nine case studies that analyze key works in the musical and technical development of computer music. The book's companion website offers demonstration videos of the techniques used and downloadable software. There, readers can view interviews and test emulations of the software used by the composers for themselves. The software also presents musical analyses of each of the nine case studies to enable readers to engage with the musical structure aurally and interactively.
Video games open portals into fantastical worlds where imaginative play prevails. The virtual medium seemingly provides us with ample opportunities to behave and act out with relative safety and impunity. Or does it? Sound Play explores the aesthetic, ethical, and sociopolitical stakes of our engagements with gaming's audio phenomena - from sonic violence to synthesized operas, from democratic music-making to vocal sexual harassment. Author William Cheng shows how the simulated environments of games empower designers, composers, players, and scholars to test and tinker with music, noise, speech, and silence in ways that might not be prudent or possible in the real world. In negotiating utopian and alarmist stereotypes of video games, Sound Play synthesizes insights from across musicology, sociology, anthropology, communications, literary theory, and philosophy. With case studies that span Final Fantasy VI, Silent Hill, Fallout 3, The Lord of the Rings Online, and Team Fortress 2, this book insists that what we do in there - in the safe, sound spaces of games - can ultimately teach us a great deal about who we are and what we value (musically, culturally, humanly) out here.
In Max/MSP/Jitter for Music, expert author and music technologist V. J. Manzo provides a user-friendly introduction to a powerful programming language that can be used to write custom software for musical interaction. Through clear, step-by-step instructions illustrated with numerous examples of working systems, the book equips readers with everything they need to know in order to design and complete meaningful music projects. The book also discusses ways to interact with software beyond the mouse and keyboard through use of camera tracking, pitch tracking, video game controllers, sensors, mobile devices, and more. The book does not require any prerequisite programming skills, but rather walks readers through a series of small projects through which they will immediately begin to develop software applications for practical musical projects. As the book progresses, and as the individual's knowledge of the language grows, the projects become more sophisticated. This new and expanded second edition brings the book fully up-to-date including additional applications in integrating Max with Ableton Live. It also includes a variety of additional projects as part of the final three project chapters. The book is of special value both to software programmers working in Max/MSP/Jitter and to music educators looking to supplement their lessons with interactive instructional tools, develop adaptive instruments to aid in student composition and performance activities, and create measurement tools with which to conduct music education research.
Electronic music instruments weren't called synthesizers until the 1950s, but their lineage began in 1919 with Russian inventor Lev Sergeyevich Termen's development of the Etherphone, now known as the Theremin. From that point, synthesizers have undergone a remarkable evolution from prohibitively large mid-century models confined to university laboratories to the development of musical synthesis software that runs on tablet computers and portable media devices.
Over the last century, developments in electronic music and art have enabled new possibilities for creating audio and audio-visual artworks. With this new potential has come the possibility for representing subjective internal conscious states, such as the experience of hallucinations, using digital technology. Combined with immersive technologies such as virtual reality goggles and high-quality loudspeakers, the potential for accurate simulations of conscious encounters such as Altered States of Consciousness (ASCs) is rapidly advancing. In Inner Sound, author Jonathan Weinel traverses the creative influence of ASCs, from Amazonian chicha festivals to the synaesthetic assaults of neon raves; and from an immersive outdoor electroacoustic performance on an Athenian hilltop to a mushroom trip on a tropical island in virtual reality. Beginning with a discussion of consciousness, the book explores how our subjective realities may change during states of dream, psychedelic experience, meditation, and trance. Taking a broad view across a wide range of genres, Inner Sound draws connections between shamanic art and music, and the modern technoshamanism of psychedelic rock, electronic dance music, and electroacoustic music. Going beyond the sonic into the visual, the book also examines the role of altered states in film, visual music, VJ performances, interactive video games, and virtual reality applications. Through the analysis of these examples, Weinel uncovers common mechanisms, and ultimately proposes a conceptual model for Altered States of Consciousness Simulations (ASCSs). This theoretical model describes how sound can be used to simulate various subjective states of consciousness from a first-person perspective, in an interactive context. Throughout the book, the ethical issues regarding altered states of consciousness in electronic music and audio-visual media are also examined, ultimately allowing the reader not only to consider the design of ASCSs, but also the implications of their use for digital society.
With Computational Thinking in Sound, veteran educators Gena R. Greher and Jesse M. Heines provide the first book ever written for music fundamentals educators which is devoted specifically to music, sound, and technology. The authors demonstrate how the range of mental tools in computer science - for example, analytical thought, system design, and problem design and solution - can be fruitfully applied to music education, including examples of successful student work. While technology instruction in music education has traditionally focused on teaching how computers and software work to produce music, Greher and Heines offer context: a clear understanding of how music technology can be structured around a set of learning challenges and tasks of the type common in computer science classrooms. Using a learner-centered approach that emphasizes project-based experiences, the book provides music educators with multiple strategies to explore, create, and solve problems with music and technology in equal parts. It also provides examples of hands-on activities which encourage students, alone and in interdisciplinary groups, to explore the basic principles that underlie today's music technology and which expose them to current multimedia development tools.
An Introduction to Audio Content Analysis enables readers to understand the algorithmic analysis of musical audio signals with AI-driven approaches. It serves as a comprehensive guide to audio content analysis, explaining how signal processing and machine learning approaches can be used to extract musical content from audio. It gives readers the algorithmic understanding to teach a computer to interpret music signals and thus allows for the design of tools for interacting with music. The work ties together topics from audio signal processing and machine learning, showing how to use audio content analysis to pick up musical characteristics automatically. A multitude of audio content analysis tasks related to the extraction of tonal, temporal, timbral, and intensity-related characteristics of the music signal are presented. Each task is introduced from both a musical and a technical perspective, detailing the algorithmic approach as well as providing practical guidance on implementation details and evaluation. To aid comprehension, each task description begins with a short introduction to the most important musical and perceptual characteristics of the covered topic, followed by a detailed algorithmic model and its evaluation, and concludes with questions and exercises. For the interested reader, updated supplemental materials are provided via an accompanying website. Written by a well-known expert in the music industry, the book covers sample topics including: digital audio signals and their representation, common time-frequency transforms, and audio features; pitch and fundamental frequency detection, and key and chord detection; representation of dynamics in music and intensity-related features; onset and tempo detection, beat histograms, detection of structure in music, and sequence alignment; and audio fingerprinting and musical genre, mood, and instrument classification. An invaluable guide for newcomers to audio signal processing and industry experts alike, An Introduction to Audio Content Analysis covers a wide range of introductory topics in music information retrieval and machine listening, allowing students and researchers to quickly gain core knowledge of audio analysis and dig deeper into specific aspects of the field with the help of extensive references.
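As a minimal, illustrative sketch (not code from the book) of the kind of audio feature such a text covers, the following TypeScript function computes the spectral centroid, a common timbre-related feature, from a single magnitude-spectrum frame; the frame contents, sample rate, and FFT size are assumed to be supplied by the caller.

```typescript
// Illustrative sketch only: spectral centroid of one magnitude-spectrum frame.
// `magnitudes` is assumed to hold |X(k)| for bins k = 0..N/2; `sampleRate`
// and `fftSize` are assumptions supplied by the caller.
function spectralCentroid(
  magnitudes: Float32Array,
  sampleRate: number,
  fftSize: number
): number {
  let weightedSum = 0;
  let magnitudeSum = 0;
  for (let k = 0; k < magnitudes.length; k++) {
    const freq = (k * sampleRate) / fftSize; // bin index -> frequency in Hz
    weightedSum += freq * magnitudes[k];
    magnitudeSum += magnitudes[k];
  }
  // Return 0 for a silent frame rather than dividing by zero.
  return magnitudeSum > 0 ? weightedSum / magnitudeSum : 0;
}
```

In practice a feature like this would be computed per analysis frame and aggregated over time before being fed to a classifier, which is the general workflow the book's task descriptions follow.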
Multimodal Behavioral Analysis in the Wild: Advances and Challenges presents the state-of-the-art in behavioral signal processing using different data modalities, with a special focus on identifying the strengths and limitations of current technologies. The book focuses on audio and video modalities, while also emphasizing emerging modalities such as accelerometer or proximity data. It covers tasks at different levels of complexity, from low level (speaker detection, sensorimotor links, source separation), through middle level (conversational group detection, addresser and addressee identification), to high level (personality and emotion recognition), providing insights on how to exploit inter-level and intra-level links. This is a valuable resource on the state-of-the-art and future research challenges of multimodal behavioral analysis in the wild. It is suitable for researchers and graduate students in the fields of computer vision, audio processing, pattern recognition, machine learning, and social signal processing.
The communication field is evolving rapidly in order to keep up with society's demands. As such, it becomes imperative to research and report recent advancements in computational intelligence as it applies to communication networks. The Handbook of Research on Recent Developments in Intelligent Communication Application is a pivotal reference source for the latest developments on emerging data communication applications. Featuring extensive coverage across a range of relevant perspectives and topics, such as satellite communication, cognitive radio networks, and wireless sensor networks, this book is ideally designed for engineers, professionals, practitioners, upper-level students, and academics seeking current information on emerging communication networking trends.
Understanding Video Game Music develops a musicology of video game music by providing methods and concepts for understanding music in this medium. From the practicalities of investigating the video game as a musical source to the critical perspectives on game music - using examples including Final Fantasy VII, Monkey Island 2, SSX Tricky and Silent Hill - these explorations not only illuminate aspects of game music, but also provide conceptual ideas valuable for future analysis. Music is not a redundant echo of other textual levels of the game, but central to the experience of interacting with video games. As the author likes to describe it, this book is about music for racing a rally car, music for evading zombies, music for dancing, music for solving puzzles, music for saving the Earth from aliens, music for managing a city, music for being a hero; in short, it is about music for playing.
PRO TOOLS 101 OFFICIAL COURSEWARE takes a comprehensive approach to learning the fundamentals of Pro Tools systems. Now updated for Pro Tools 9 software, this new edition from the definitive authority on Pro Tools covers everything you need to know to complete a Pro Tools project. Learn to build sessions that include multitrack recordings of live instruments, MIDI sequences, software synthesizers, and virtual instruments. Through hands-on tutorials, develop essential techniques for recording, editing, and mixing. The included DVD-ROM offers tutorial files and videos, additional documentation, and Pro Tools sessions to accompany the projects in the text. Developed as the foundation course of the official Avid Pro Tools Certification program, the guide can be used to learn on your own or to pursue formal Pro Tools certification through an Avid Authorized Training Partner. Join the ranks of audio professionals around the world as you unleash the creative power of your Pro Tools system.
Although systematic and cultural musicology have provided the two main perspectives on the nature of music, music informatics has emerged as an interdisciplinary research area that offers a different view of the nature of music through computer technologies. Structuring Music through Markup Language: Designs and Architectures offers a different approach to music by focusing on information organization and the development of XML-based languages. The book aims to offer a new set of tools for practical implementations and a new investigation into the theory of music.
In the literature of information science, a number of studies have been carried out attempting to model cognitive, affective, behavioral, and contextual factors associated with human information seeking and retrieval. On the other hand, only a few studies have addressed the exploration of creative thinking in music, focusing on understanding and describing individuals' information seeking behavior during the creative process. Trends in Music Information Seeking, Behavior, and Retrieval for Creativity connects theoretical concepts in information seeking and behavior to the music creative process. This publication presents new research, case studies, surveys, and theories related to various aspects of information retrieval and the information seeking behavior of diverse scholarly and professional music communities. Music professionals, theorists, researchers, and students will find this publication an essential resource for their professional and research needs.
It is clear that the digital age has fully embraced music production, distribution, and transcendence for a vivid audience that demands more music in both quantity and versatility. However, the evolving world of digital music production faces a calamity of tremendous proportions: asymmetrically increasing online piracy that devastates radio stations, media channels, producers, composers, and artists, severely threatening the music industry. Digital Tools for Computer Music Production and Distribution presents research-based perspectives and solutions for integrating computational methods for music production, distribution, and access around the world, in addition to challenges facing the music industry in an age of digital access, content sharing, and crime. Highlighting the changing scope of the music industry and the role of the digital age in such transformations, this publication is an essential resource for computer programmers, sound engineers, language and speech experts, legal experts specializing in music piracy and rights management, researchers, and graduate-level students across disciplines.
The unique research area of audio-visual speech recognition has attracted much interest in recent years, as visual information about lip dynamics has been shown to improve the performance of automatic speech recognition systems, especially in noisy environments. Visual Speech Recognition: Lip Segmentation and Mapping presents an up-to-date account of research done in the areas of lip segmentation, visual speech recognition, and speaker identification and verification. A useful reference for researchers working in this field, this book contains the latest research results from renowned experts, with in-depth discussion of topics such as visual speaker authentication, lip modeling, and systematic evaluation of lip features.
Tanja Schultz and Katrin Kirchhoff have compiled a comprehensive overview of speech processing from a multilingual perspective. By taking this all-inclusive approach to speech processing, the editors have included theories, algorithms, and techniques that are required to support spoken input and output in a large variety of languages. This book presents a comprehensive introduction to research problems and solutions, from both a theoretical and a practical perspective, and highlights technology that incorporates the increasing necessity for multilingual applications in our global community.
The future of music archiving and search engines lies in deep learning and big data. Music information retrieval algorithms automatically analyze musical features like timbre, melody, rhythm, or musical form, and artificial intelligence then sorts and relates these features. At the first International Symposium on Computational Ethnomusicological Archiving, held from November 9 to 11, 2017 at the Institute of Systematic Musicology in Hamburg, Germany, a new Computational Phonogram Archiving standard was discussed as an interdisciplinary approach. Ethnomusicologists, music and computer scientists, systematic musicologists, as well as music archivists, composers, and musicians presented tools, methods, and platforms and shared fieldwork and archiving experiences in the fields of musical acoustics, informatics, and music theory, as well as in music storage, reproduction, and metadata. The Computational Phonogram Archiving standard is also in high demand in the music market as a search engine for music consumers. This book offers a comprehensive overview of the field written by leading researchers around the globe.
The author presents Probatio, a toolkit for building functional DMI (digital musical instruments) prototypes, artifacts in which gestural control and sound production are physically decoupled but digitally mapped. He uses the concept of instrumental inheritance, the application of gestural and/or structural components of existing instruments to generate ideas for new instruments. To support analysis and combination, he then leverages a traditional design method, the morphological chart, in which existing artifacts are split into parts, presented in a visual form and then recombined to produce new ideas. And finally he integrates the concept and the method in a concrete object, a physical prototyping toolkit for building functional DMI prototypes: Probatio. The author's evaluation of this modular system shows it reduces the time required to develop functional prototypes. The book is useful for researchers, practitioners, and graduate students in the areas of musical creativity and human-computer interaction, in particular those engaged in generating, communicating, and testing ideas in complex design spaces.
This book explores how the rise of widely available digital technology impacts the way music is produced, distributed, promoted, and consumed, with a specific focus on the changing relationship between artists and audiences. Through in-depth interviewing, focus group interviewing, and discourse analysis, this study demonstrates how digital technology has created a closer, more collaborative, fluid, and multidimensional relationship between artist and audience. Artists and audiences are simultaneously engaged with music through technology - and technology through music - while negotiating personal and social aspects of their musical lives. In light of consistent, active engagement, rising co-production, and collaborative community experience, this book argues we might do better to think of the audience as accomplices to the artist.
Learn how to program JavaScript while creating interactive audio applications with JavaScript for Sound Artists: Learn to Code With the Web Audio API! William Turner and Steve Leonard showcase the basics of the JavaScript programming language so that readers can learn how to build browser-based audio applications, such as music synthesizers and drum machines. The companion website offers further opportunity for growth. Web Audio API instruction includes oscillators, audio file loading and playback, basic audio manipulation, panning, and time. This book encompasses all of the basic features of JavaScript alongside aspects of the Web Audio API to heighten the capability of any browser. Key features: the book uses the reader's existing knowledge of audio technology to facilitate learning how to program using JavaScript; the teaching is done through a series of annotated examples and explanations; downloadable code examples and links to additional reference material are included on the book's companion website; and this example-based approach to teaching JavaScript for the creative audio community makes learning programming more approachable to nonprofessional programmers and is not found elsewhere in the market.
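To give a flavour of the Web Audio API material described above (a generic browser example, not taken from the book or its companion website), the TypeScript sketch below creates an oscillator, routes it through a gain node for volume control, and plays a one-second tone; it assumes a browser environment and is typically triggered from a user gesture, since browsers require one before audio playback can start.

```typescript
// Generic Web Audio API example (browser environment assumed):
// play a 440 Hz sine tone for one second through a gain node.
function playTone(): void {
  const ctx = new AudioContext();

  const osc = ctx.createOscillator(); // sound source
  osc.type = "sine";
  osc.frequency.value = 440;          // A4

  const gain = ctx.createGain();      // simple volume control
  gain.gain.value = 0.2;

  osc.connect(gain).connect(ctx.destination);

  osc.start();
  osc.stop(ctx.currentTime + 1);      // stop after one second
}

// Typically wired to a user gesture, e.g.:
// document.querySelector("button")?.addEventListener("click", playTone);
```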