This book features papers presented at IIH-MSP 2018, the 14th International Conference on Intelligent Information Hiding and Multimedia Signal Processing. The scope of IIH-MSP included information hiding and security, multimedia signal processing and networking, and bio-inspired multimedia technologies and systems. The book discusses subjects related to massive image/video compression and transmission for emerging networks, advances in speech and language processing, recent advances in information hiding and signal processing for audio and speech signals, intelligent distribution systems and applications, recent advances in security and privacy for multimodal network environments, multimedia signal processing, and machine learning. Presenting the latest research outcomes and findings, it is suitable for researchers and students who are interested in the corresponding fields. IIH-MSP 2018 was held in Sendai, Japan on 26-28 November 2018. It was hosted by Tohoku University and was co-sponsored by the Fujian University of Technology in China, the Taiwan Association for Web Intelligence Consortium in Taiwan, and the Swinburne University of Technology in Australia, as well as the Fujian Provincial Key Laboratory of Big Data Mining and Applications (Fujian University of Technology) and the Harbin Institute of Technology Shenzhen Graduate School in China.
Covering key areas of evaluation and methodology, client-side applications, specialist and novel technologies, along with initial appraisals of disabilities, this important book provides comprehensive coverage of web accessibility. Written by leading experts in the field, it provides an overview of existing research and also looks at future developments, providing a much deeper insight than can be obtained through existing research libraries, aggregations, or search engines.
Based on more than 10 years of teaching experience, Blanken and his coeditors have assembled all the topics that should be covered in advanced undergraduate or graduate courses on multimedia retrieval and multimedia databases. The individual chapters of this textbook explain the general architecture of multimedia information retrieval systems and cover various metadata languages such as Dublin Core, RDF, or MPEG. The authors emphasize high-level features and show how these are used in mathematical models to support the retrieval process. Each chapter includes suggestions for further reading, and additional exercises and teaching material are available online.
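The retrieval idea described above can be sketched in a few lines: represent each media item by a high-level feature vector and rank items by similarity to a query vector. The file names, features, and weights below are invented purely for illustration and are not taken from the book.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical high-level feature vectors for three media items.
library = {
    "sunset.jpg":    [0.9, 0.1, 0.0],
    "concert.mp4":   [0.2, 0.8, 0.7],
    "interview.mp3": [0.0, 0.3, 0.9],
}

query = [0.8, 0.2, 0.1]

# Rank the library by similarity to the query, best match first.
ranked = sorted(library, key=lambda k: cosine(library[k], query), reverse=True)
```

Real systems replace the toy vectors with learned or extracted features and use index structures rather than a linear scan, but the ranking principle is the same.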
This book covers various aspects of spatial data modelling, specifically three-dimensional (3D) modelling and structuring. Realizing "true" 3D geoinformation systems demands substantial effort, and development is under way in research centres and universities around the globe. The book presents the development of such systems and solutions, including the underlying modelling theories.
The polygon-mesh approach to 3D modeling was a huge advance, but today its limitations are clear. Longer render times for increasingly complex images effectively cap image complexity, or else stretch budgets and schedules to the breaking point. Point-based graphics promises to change all that, and this book explains how. Comprising contributions from leaders in the development and application of this technology, Point-Based Graphics examines it from all angles, beginning with the way in which the latest photographic and scanning devices have enabled modeling based on true geometry rather than appearance. From there, it's on to the methods themselves. Even though point-based graphics is in its infancy, practitioners have already established many effective, economical techniques for achieving all the major effects associated with traditional 3D modeling and rendering. You'll learn to apply these techniques, and you'll also learn how to create your own. The final chapter demonstrates how to do this using Pointshop3D, an open-source tool for developing new point-based algorithms. A copy of this tool can be found on the companion website.
In the early days of the Web a need was recognized for a language to display 3D objects through a browser. An HTML-like language, VRML, was proposed in 1994 and became the standard for describing interactive 3D objects and worlds on the Web. 3D Web courses were started, several best-selling books were published, and VRML continues to be used today. However, VRML, because it was based on HTML, is a stodgy language that is not easy to incorporate with other applications and has been difficult to add features to. Meanwhile, applications for interactive 3D graphics have been exploding in areas such as medicine, science, industry, and entertainment. There is a strong need for a set of modern Web-based technologies, applied within a standard extensible framework, to enable a new generation of modeling & simulation applications to emerge, develop, and interoperate. X3D is the next generation open standard for 3D on the web. It is the result of several years of development by the Web 3D Consortium's X3D Task Group. Instead of a large monolithic specification (like VRML), which requires full adoption for compliance, X3D is a component-based architecture that can support applications ranging from a simple non-interactive animation to the latest streaming or rendering applications. X3D replaces VRML, but also provides compatibility with existing VRML content and browsers. Don Brutzman organized the first symposium on VRML and is playing a similar role with X3D; he is a founding member of the consortium. Len Daly is a professional member of the consortium, and both Len and Don have been involved with the development of the standard from the start.
This book includes a short history of interactive narrative and an account of a small group collaboratively authored social media narrative: Romeo and Juliet on Facebook: After Love Comes Destruction. At the forefront of narrative innovation are social media channels - speculative spaces for creating and experiencing stories that are interactive and collaborative. Media, however, is only the access point to the expressiveness of narrative content. Wikis, messaging, mash-ups, and social media (Facebook, Twitter, YouTube and others) are on a trajectory of participatory story creation that goes back many centuries. These forms offer authors ways to create narrative meaning that reflects our current media culture, as the harlequinade reflected the culture of the 18th century, and as the volvelle reflected that of the 13th century. Interactivity, Collaboration, and Authoring in Social Media first prospects the last millennium for antecedents of today's authoring practices. It does so with a view to considering how today's digital manifestations are a continuation, perhaps a reiteration, perhaps a novel pioneering, of humans' abiding interest in interactive narrative. The book then takes the reader inside the process of creating a collaborative, interactive narrative in today's social media through an authoring experience undertaken by a group of graduate students. The engaging mix of blogs, emails, personal diaries, and fabricated documents used to create the narrative demonstrates that a social media environment can facilitate a meaningful and productive collaborative authorial experience and result in an abundance of networked, personally expressive, and visually and textually referential content.
The resulting narrative, After Love Comes Destruction, based on Shakespeare's Romeo and Juliet, shows how a generative narrative space evolved around the students' use of social media in ways they had not previously considered, both for authoring and for delivering their final narrative artifact.
This book provides insight into the challenges of providing data authentication over wireless communication channels. The authors posit that established standard authentication mechanisms, designed for wired devices, are not sufficient to authenticate data such as voice, images, and video over wireless channels. They propose new mechanisms based on so-called soft authentication algorithms, which tolerate some legitimate modifications in the data they protect: the goal is to remain tolerant to changes in the content while still identifying forgeries. An additional advantage of soft authentication algorithms is the ability to identify the locations of the modifications and, where possible, correct them. The authors show how to achieve this by protecting the data features with the help of error-correcting codes; the correction methods are typically based on watermarking, as discussed in the book. Provides a discussion of data (particularly image) authentication methods in the presence of the noise experienced in wireless communication; presents a new class of soft authentication methods, instead of the standard hard authentication methods, used to tolerate minor changes in image data; features authentication methods based on the use of authentication tags as well as digital watermarks.
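As a rough illustration of the "soft" idea (a toy sketch, not the authors' actual algorithms), one can authenticate a coarse feature vector rather than the raw bits: mild noise then survives verification, a large forgery is rejected, and the mismatch positions localise the tampering. The feature choice, quantisation step, and threshold below are all illustrative assumptions.

```python
def block_means(pixels, block=4):
    """Coarse feature: the mean of each block of values (a stand-in for
    the robust image features a real scheme would extract)."""
    return [sum(pixels[i:i + block]) / block
            for i in range(0, len(pixels), block)]

def make_tag(pixels, step=32):
    """Quantise the features into an authentication tag, so small
    legitimate changes map to the same tag symbols."""
    return [round(m / step) for m in block_means(pixels)]

def soft_verify(pixels, tag, max_mismatches=1):
    """Accept if the recomputed tag differs in at most a few positions;
    the mismatch positions localise where the content changed."""
    fresh = make_tag(pixels)
    mismatches = [i for i, (a, b) in enumerate(zip(fresh, tag)) if a != b]
    return len(mismatches) <= max_mismatches, mismatches

original = [100] * 16
tag = make_tag(original)

noisy = [103] * 16                 # mild channel noise: still authentic
forged = original[:8] + [220] * 8  # half the content replaced: forgery

ok_noisy, _ = soft_verify(noisy, tag)
ok_forged, where = soft_verify(forged, tag)
```

A real scheme would additionally protect the tag itself (e.g. with a keyed hash or watermark) and, as the book describes, use error-correcting codes over the features so that localised modifications can be corrected, not just detected.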
This book offers a comprehensive explanation of iterated function systems and how to use them to generate complex objects. The discussion covers the most popular fractal models applied in the field of image synthesis; surveys iterated function system models; and explores algorithms for creating and manipulating fractal objects, along with techniques for implementing them. The book includes both descriptive text and pseudo-code samples for the convenience of graphics application programmers.
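A minimal sketch of the idea: an iterated function system is a set of contractive affine maps, and its attractor can be rendered with the classic "chaos game" by repeatedly applying a randomly chosen map to a point. The three maps below are the well-known Sierpinski-triangle IFS, used here only as an illustration of the technique the book develops in depth.

```python
import random

# Each map is an affine contraction: (x, y) -> (a*x + b*y + e, c*x + d*y + f).
# These coefficients define the Sierpinski triangle.
SIERPINSKI_MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def chaos_game(maps, n_points=10000, seed=0):
    """Approximate the IFS attractor by iterating randomly chosen maps."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = chaos_game(SIERPINSKI_MAPS)
```

Plotting `pts` as dots reveals the fractal; in practice the first few iterates are discarded as "burn-in" before the point has converged onto the attractor.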
Correcting the Great Mistake. People often mistake one thing for another. That's human nature. However, one would expect the leaders in a particular field of endeavour to have superior abilities to discriminate among the developments within that field. That is why it is so perplexing that the technology elite - supposedly savvy folk such as software developers, marketers and businessmen - have continually mistaken Web-based graphics for something it is not. The first great graphics technology for the Web, VRML, has been mistaken for something else since its inception. Viewed variously as a game system, a format for architectural walkthroughs, a platform for multi-user chat and an augmentation of reality, VRML may qualify as the least understood invention in the history of information technology. Perhaps it is so because when VRML was originally introduced it was touted as a tool for putting the shopping malls of the world online, at once prosaic and horrifyingly mundane to those of us who were developing it. Perhaps those first two initials, "VR," created expectations of sprawling, photorealistic fantasy landscapes for exploration and play across the Web. Or perhaps the magnitude of the invention was simply too great to be understood at the time by the many, ironically even by those spending the money to underwrite its development. Regardless of the reasons, VRML suffered in the mainstream as it was twisted to meet unintended ends and stretched far beyond its limitations.
This comprehensive book draws together experts to explore how knowledge technologies can be exploited to create new multimedia applications, and how multimedia technologies can provide new contexts for the use of knowledge technologies. Thorough coverage of all relevant topics is given. The step-by-step approach guides the reader from fundamental enabling technologies of ontologies, analysis and reasoning, through to applications which have hitherto had less attention.
Here is a thorough yet accessible introduction to the three technical foundations for multimedia applications across the Internet: communications (principles, technologies and networking); compressive encoding of digital media; and Internet protocols and services. All the contributing system elements are explained through descriptive text and numerous illustrative figures; the result is a book well suited to non-specialists, preferably with a technical background, who need well-composed tutorial introductions to the three foundation areas. The text discusses the latest advances in digital audio and video encoding, optical and wireless communications technologies, high-speed access networks, and IP-based media streaming, all crucial enablers of the multimedia Internet.
Research in the field of multimedia metadata is especially challenging: many scientific publications and reports on research projects are published every year, and the range of possible applications is diverse and huge. This book gives an overview of fundamental issues within the field of multimedia metadata, focusing on contextualized, ubiquitous, accessible and interoperable services at a higher semantic level. It provides a selection of basic articles that form a base for multimedia metadata research, and presents a view of the current state of the art in the field. It provides information from diverse application domains (broadcasting, interactive TV, e-learning and social software) such as: Multimedia on the Web 2.0 - Databases for Multimedia (Meta-)Data - Multimedia Information Retrieval and Evaluation - Multimedia Metadata Standards - Ontologies for Multimedia. The multimedia metadata community (www.multimedia-metadata.info), from which this book originated, brings together experts from research and industry in the area of multimedia metadata research and application development. The community bridges the gap between academic research and industrial-scale development of innovative products. By summarizing the work of the community, this book contributes to the aforementioned fields by addressing these topics for a broad range of readers.
With the fast growth of multimedia information, content-based video analysis, indexing and representation have attracted increasing attention in recent years. Many applications have emerged in these areas, such as video-on-demand, distributed multimedia systems, digital video libraries, distance learning/education, entertainment, surveillance and geographical information systems. The need for content-based video indexing and retrieval was also recognized by ISO/MPEG, and a new international standard called "Multimedia Content Description Interface" (or, in short, MPEG-7) was initiated in 1998 and finalized in September 2001. In this context, a systematic and thorough review of existing approaches as well as state-of-the-art techniques in the areas of video content analysis, indexing and representation is presented in this book. In addition, we specifically elaborate on a system which analyzes, indexes and abstracts movie content based on the integration of multiple media modalities. The content of each part of this book is briefly previewed below. In the first part, we segment a video sequence into a set of cascaded shots, where a shot consists of one or more continuously recorded image frames. Both raw and compressed video data are investigated. Moreover, considering that there are always non-story units in real TV programs, such as commercials, a novel commercial break detection/extraction scheme is developed which exploits both audio and visual cues to achieve robust results. Specifically, we first employ visual cues such as the video data statistics, the camera cut frequency, and the existence of delimiting black frames between commercials and programs to obtain coarse-level detection results.
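One of the visual cues mentioned above, the delimiting black frames between commercials and programme content, can be sketched very simply: flag frames whose mean luminance falls below a threshold and collect runs of consecutive black frames as candidate break points. The frame representation and the threshold value here are illustrative assumptions, not the book's exact scheme, which combines several audio and visual cues.

```python
BLACK_THRESHOLD = 16  # mean luminance below this counts as "black" (assumed value)

def is_black_frame(frame):
    """A frame is modelled as a flat list of luminance samples."""
    return sum(frame) / len(frame) < BLACK_THRESHOLD

def black_runs(frames):
    """Return (start, end) index pairs of consecutive black frames,
    which a coarse detector treats as candidate commercial boundaries."""
    runs, start = [], None
    for i, frame in enumerate(frames):
        if is_black_frame(frame):
            if start is None:
                start = i
        elif start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(frames) - 1))
    return runs

# Tiny synthetic clip: bright frames, two black frames, bright frames.
video = [[120] * 4] * 3 + [[2] * 4] * 2 + [[90] * 4] * 3
candidates = black_runs(video)
```

A full detector would then refine these coarse candidates with the other cues the text lists, such as cut frequency, data statistics, and audio silence.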
As the visual effects industry has diversified, so too have the books written to serve the needs of this industry. Today there are hundreds of highly specialized titles focusing on particular aspects of film and broadcast animation, computer graphics, stage photography, miniature photography, color theory, and many others.
II Challenges in Data Mapping. Part II deals with one of the most challenging tasks in interactive visualization: mapping and teasing out information from large, complex datasets and generating visual representations. This section consists of four chapters. Binh Pham, Alex Streit, and Ross Brown provide a comprehensive requirement analysis of information uncertainty visualizations. They examine the sources of uncertainty, review aspects of its complexity, introduce typical models of uncertainty, and analyze major issues in the visualization of uncertainty from various user and task perspectives. Alfred Inselberg examines challenges in multivariate data analysis. He explains how relations among multiple variables can be mapped uniquely into 2-space subsets having geometrical properties, and introduces the Parallel Coordinates methodology for the unambiguous visualization and exploration of multidimensional geometry and multivariate relations. Christiaan Gribble describes two alternative approaches to interactive particle visualization: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing. Finally, Christof Rezk Salama reviews state-of-the-art strategies for the assignment of visual parameters in scientific visualization systems. He explains the process of mapping abstract data values to visual parameters based on transfer functions, clarifies the terms pre- and postclassification, and introduces state-of-the-art user interfaces for the design of transfer functions.
Fundamental solutions in understanding information have been elusive for a long time. The field of Artificial Intelligence has proposed the Turing Test as a way to test for the "smart" behaviors of computer programs that exhibit human-like qualities. The equivalent of the Turing Test for the field of Human Information Interaction (HII) - getting information to the people who need it and helping them understand it - is the new challenge of the Web era. In a short amount of time, the infrastructure of the Web became ubiquitous, not just in terms of protocols and transcontinental cables but also in terms of everyday devices capable of recalling network-stored data, sometimes wirelessly. As these infrastructures become reality, our attention on HII issues needs to shift from information access to information sensemaking, a relatively new term coined to describe the process of digesting information and understanding its structure and intricacies so as to make decisions and take action.
Soft City Culture and Technology: The Betaville Project discusses the complete cycle of conception, development, and deployment of the Betaville platform. Betaville is a massively participatory online environment for distributed 3D design and development of proposals for changes to the built environment: an experimental integration of art, design, and software development for the public realm. Through a detailed account of Betaville from a Big Crazy Idea to a working "deep social medium," the author examines the current conditions of performance and accessibility of hardware, software, networks, and skills that can be brought together into a new form of open public design and deliberation space, spanning and integrating the disparate spheres of art, architecture, social media, and engineering. Betaville is an ambitious enterprise of building compelling and constructive working relationships in situations where roles and disciplinary boundaries must be as agile as the development process of the software itself. Through a considered account and analysis of the interdependencies between Betaville's project design, development methods, and deployment, the reader can gain a deeper understanding of the potential socio-technical forms of New Soft Cities: blended virtual-physical worlds whose "public works" must ultimately serve and succeed as massively collaborative works of art and infrastructure.