Welcome to Loot.co.za!
Content Licensing is a wide-ranging and comprehensive guide to providing content for electronic dissemination. It offers a step-by-step introduction to the why and how of digital content and its licensing, answers frequently asked questions, and examines the context in which licensing takes place. What makes the book unique is that it approaches licensing from a range of perspectives.
This book will assist journalists and Flash developers who are working together to bring video, audio, still photos, and animated graphics into one complete Web-based package. It is not just another Flash book, because it focuses on journalists' need to tell an accurate story and provide accurate graphics. It illustrates how to animate graphics such as maps, illustrations, and diagrams using Flash, and shows journalists how to integrate high-quality photos and audio interviews into a complete news package for the Web. Each lesson is followed by a learning summary so that journalists can review the skills they have acquired along the way. In addition, the book's six case studies allow readers to study the characteristics of news packages created with Flash by journalists and Web developers at The Washington Post, MSNBC.com, and Canadian and European news organizations.
This issue represents a broad synopsis of the past, present, and future of electronic publishing. The contributors explore the opportunities and challenges of this new distribution channel and its effects on publishers, authors/editors, distributors, and consumers. Holding the key to the "new world," publishers will face new opportunities and nagging issues of new competition, content control, and protection of revenue streams, requiring strategies that stress rationalization of distribution systems, cross-promotion, strategic pricing, and leveraging new revenue sources. The issue also highlights consumers' objections to these changes, the benefits the new technology brings them, and the adaptation of the publishing industry as a whole.
Comprehensive, cross-platform, DIY guide to the creation of a wide range of graphic effects: from the scanning and manipulation of photographs to exciting 3D graphics and the creative use of typography. Benefit from a design professional's experience, not the software vendors'!
Part One leads you through a summary of the rapid advances in graphic design software and hardware now available to the PC or Mac user, followed by a structured overview of the rich array of resources available to the digital designer in the form of drawing, painting and 3D applications, clipart, photo libraries, scanned images, digital photographs and new Internet sources.
Part Two is structured as a series of Workshop sessions. Each session explains in simple language the methods and techniques used to create the wide variety of over 300 graphic design examples included in the book. The examples are based on a wide range of popular PC and Mac applications, covering vector drawing, painting, scanning, photo editing, the use of special-effect filters and the creation of 3D effects.
Ken Pender is a freelance graphic arts professional. He also worked for 25 years with IBM and was Manager of their European Computer Integrated Manufacturing Technology Centre in Germany.
New Subediting gives a detailed account of modern editing and production techniques. Its aim is both to help the young subeditor and to spell out to the newcomer to newspaper journalism what happens between the writing of news stories and features and their appearance in the newspaper when it comes off the press. In this age of technological change, the quality of the subbing has never been more important to a successful newspaper. The careful use of typography, pictures, graphics and compelling headlines, and the skillful handling of text coupled with good page planning, all help to give character, style and readability. This book examines, and draws lessons from, work in contemporary newspapers in editing and presentation; it defines the varied techniques of copytasting, of editing news stories and features, of styles of headline writing and the use of typography to guide and draw the attention of the reader. It takes into account developments in the use of English as a vehicle of mass communication in two important chapters on structure and word use; and it shows how to get the best out of the electronic tools now available to subeditors. It also reminds journalists that, however advanced the tools, a newspaper is only as good as the creative skills of those who write, edit and put it together.
This book constitutes the refereed proceedings of the 7th International Conference on Health Information Science, HIS 2018, held in Cairns, QLD, Australia, in October 2018. The 13 full papers and 5 short papers presented were carefully reviewed and selected from 43 submissions. The papers feature multidisciplinary research results in health information science and systems that support health information management and health service delivery. They relate to all aspects of the conference scope: medical/health/biomedicine information resources such as patient medical records, devices and equipment; software and tools to capture, store, retrieve, process, analyze, and optimize the use of information in the health domain; data management, data mining, and knowledge discovery; management of public health, examination of standards, and privacy and security issues; computer visualization and artificial intelligence for computer-aided diagnosis; and the development of new architectures and applications for health information systems.
Harness the power of Adobe InDesign's data merge and style panel. Whether you're creating custom mail-outs or meeting other mail-merge needs, familiarize yourself with this powerful InDesign panel in this in-depth, step-by-step guide. This book shows you how to easily create, edit, and print data-merged documents that match specific branding and style guidelines. You'll learn how to use MS Excel alongside InDesign to create a faster workflow and quickly turn your Adobe InDesign CC 2017 files into printer-ready files. The book also looks at how to apply paragraph and character styles to your text and how to alter formatting using Global Regular Expressions Print (GREPs). With Data Merge and Styles for Adobe InDesign CC 2017 as your guide, you'll see how to save time and money by learning all the peculiarities and powerful features of Adobe InDesign data merge. By the end of this book, you'll be able to streamline your workflow and avoid MS Word's mail merge and its back-and-forth edits.
What You'll Learn:
- Create custom print media with text styles using Adobe InDesign CC 2017
- Work with GREPs in conjunction with Character and Paragraph Styles to customize data
- Build a numbering sequence for tickets
- Create single and multiple data merges
Who This Book Is For: Students, graphic designers, and corporate administrators who need to create documents for events.
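The core idea of a data merge — a template whose named fields are filled row by row from a spreadsheet export — can be sketched outside InDesign in a few lines of Python. The field names and ticket data below are hypothetical, standing in for an Excel export and an InDesign layout with <<Field>> placeholders:

```python
import csv
import io

# Hypothetical data source, as it might be exported from Excel to CSV.
# Column headers play the role of InDesign's <<Field>> placeholders.
data = """Name,Event,Seat
Alice,Gala Dinner,A12
Bob,Gala Dinner,B07"""

# A text template standing in for the InDesign layout with merge fields.
template = "Ticket for {Name} - {Event}, seat {Seat}"

records = csv.DictReader(io.StringIO(data))
tickets = [template.format(**row) for row in records]

for ticket in tickets:
    print(ticket)
```

Each row produces one filled-in ticket, which mirrors how InDesign generates one merged page (or one record per page region) per data row.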
Expand your skills for laying out and formatting documents and eBooks deployed for screen viewing on computers, tablets, and smartphones. The book covers how to add interactivity to reflowable and fixed-layout eBooks and interactive PDF documents, and how to take advantage of Adobe's new Publish Online (Preview). Tips, techniques, and workarounds offer a comprehensive view of adding interactivity to any kind of document and deploying it on social media and web sites. Learn essential skills for composing documents in Adobe InDesign: how to work with styles, format text and graphics, work with rich media, and create multi-state objects, hyperlinks, and animations.
What You'll Learn:
- Set up documents for interactive digital publishing
- Create animations in InDesign
- Build and work with multi-state objects
- Host interactive documents on Facebook and other social media sites
Who This Book Is For: Graphic designers, book designers, and publishers.
Beginning Scribus is the book you wish you'd read when you downloaded Scribus for the first time. Scribus is an award-winning page-layout program used by newspaper designers, magazine designers and those who want to do proper page layout without paying for an expensive solution. It is free and open source, providing a useful alternative for those who cannot afford, or choose not to use, Adobe InDesign or QuarkXPress. Beginning Scribus provides you with the skills you will need in order to use this program productively. It demonstrates the techniques used by printers and publishers to create a range of layouts and effects, and it shows you how you can use these techniques to design everything from a flyer to a three-fold brochure. Using the latest Scribus release, Beginning Scribus takes you through the process of designing a magazine from start to finish and teaches you some of the tricks of professional page layout and design. The book also provides a definitive guide to desktop publishing using free, open source tools, such as GIMP for photo manipulation.
This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.
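One classic building block of the grammatical-inference methods the blurb describes is the prefix tree acceptor (PTA): a tree-shaped automaton built from positive sample strings, which state-merging algorithms then generalize. A minimal toy sketch (illustrative only, not one of the book's listings):

```python
# Build a prefix tree acceptor (PTA) from positive samples: a tree-shaped
# deterministic automaton that accepts exactly the training strings.

def build_pta(samples):
    """Return (transitions, accepting): transitions maps
    state -> {symbol: next_state}; accepting is the set of final states."""
    transitions = {0: {}}
    accepting = set()
    next_state = 1
    for word in samples:
        state = 0
        for symbol in word:
            if symbol not in transitions[state]:
                transitions[state][symbol] = next_state
                transitions[next_state] = {}
                next_state += 1
            state = transitions[state][symbol]
        accepting.add(state)  # the state reached by a full sample is final
    return transitions, accepting

def accepts(transitions, accepting, word):
    """Run the automaton on a word; reject on any missing transition."""
    state = 0
    for symbol in word:
        if symbol not in transitions[state]:
            return False
        state = transitions[state][symbol]
    return state in accepting

trans, acc = build_pta(["ab", "abb", "ba"])
```

The PTA accepts exactly the sample set; inference algorithms such as RPNI proceed by merging its states to generalize beyond the samples.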
The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it also describes and presents concepts relating to time in easy-to-compute terms. Working out the order that events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning solutions for the major difficulties. It is a valuable resource for those working in machine learning for natural language processing as well as anyone studying time in language, or involved in annotating the structure of time in documents.
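The target of the temporal-ordering task described above is a small set of discrete relation labels between pairs of events or times. As a toy illustration of that representation only — real systems must infer the labels from vague natural-language cues, which is the hard part — here is a hypothetical helper that derives a TimeML-style label from explicit intervals:

```python
# Classify the temporal relation between two events given explicit
# (start, end) intervals. TimeML-style labels; the label set here is a
# simplified subset chosen for illustration.

def interval_relation(a, b):
    """a, b: (start, end) tuples with start <= end."""
    if a[1] < b[0]:
        return "BEFORE"
    if b[1] < a[0]:
        return "AFTER"
    if a == b:
        return "SIMULTANEOUS"
    if a[0] >= b[0] and a[1] <= b[1]:
        return "IS_INCLUDED"   # a happens within b
    if b[0] >= a[0] and b[1] <= a[1]:
        return "INCLUDES"      # b happens within a
    return "OVERLAP"
```

The machine-learning difficulty the book analyzes is precisely that text rarely supplies such clean endpoints, so these labels must be predicted from lexical and discourse evidence instead.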
In this book, Harley Hahn demystifies Emacs for programmers, students, and everyday users. The first part of the book carefully creates a context for your work with Emacs. What exactly is Emacs? How does it relate to your personal need to work quickly and to solve problems? Hahn then explains the technical details you need to understand to work with your operating system, the various interfaces, and your file system. In the second part of the book, Hahn provides an authoritative guide to the fundamentals of thinking and creating within the Emacs environment. You start by learning how to install and use Emacs with Linux, BSD-based Unix, Mac OS X, or Microsoft Windows. Written with Hahn's clear, comfortable, and engaging style, Harley Hahn's Emacs Field Guide will surprise you: an engaging book to enjoy now, a comprehensive reference to treasure for years to come.
What You Will Learn:
- Special Emacs keys
- Emacs commands
- Buffers and windows
- Cursor, point, and region
- Kill/delete, move/copy, correcting, spell checking, and filling
- Searching, including regular expressions
- Emacs major modes and minor modes
- Customizing using your .emacs file
- Built-in tools, including Dired
- Games and diversions
Who This Book Is For: Programmers, students, and everyday users who want an engaging and authoritative introduction to the complex and powerful Emacs working environment.
This book constitutes the thoroughly refereed proceedings of the 9th Russian Summer School on Information Retrieval, RuSSIR 2015, held in Saint Petersburg, Russia, in August 2015. The volume includes 5 tutorial papers, summarizing lectures given at the event, and 6 revised papers from the school participants. The papers focus on various aspects of information retrieval.
The objective of this monograph is to improve the performance of sentiment analysis models by incorporating semantic, syntactic and common-sense knowledge. The book proposes a novel semantic concept extraction approach that uses dependency relations between words to extract features from text, combining semantic and common-sense knowledge for a better understanding of the text. In addition, the book aims to extract prominent features from unstructured text by eliminating noisy, irrelevant and redundant features, and readers will discover a proposed method for efficient dimensionality reduction to alleviate the data sparseness problem faced by machine learning models. The authors highlight the four main findings of the book:
- Performance of sentiment analysis can be improved by reducing redundancy among the features. Experimental results show that the minimum Redundancy Maximum Relevance (mRMR) feature selection technique improves performance by eliminating redundant features.
- The Boolean Multinomial Naive Bayes (BMNB) machine learning algorithm with mRMR feature selection performs better than a Support Vector Machine (SVM) classifier for sentiment analysis.
- The problem of data sparseness is alleviated by semantic clustering of features, which in turn improves the performance of sentiment analysis.
- Semantic relations among the words in a text provide useful cues for sentiment analysis. Common-sense knowledge in the form of the ConceptNet ontology provides a better understanding of the text, which improves performance.
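The Boolean Multinomial Naive Bayes variant mentioned in the findings differs from standard multinomial NB only in that each document is reduced to the *set* of words it contains before counting. A self-contained toy sketch of that idea, on made-up data (not the monograph's experiments):

```python
# Boolean (binarized) Multinomial Naive Bayes: count each word at most
# once per document, then apply ordinary multinomial NB with Laplace
# smoothing. Training data below is invented for illustration.
import math
from collections import Counter, defaultdict

train = [("great movie loved it", "pos"),
         ("loved the acting great fun", "pos"),
         ("boring plot hated it", "neg"),
         ("hated the film boring", "neg")]

word_counts = defaultdict(Counter)   # per-class word presence counts
class_counts = Counter()
vocab = set()
for text, label in train:
    words = set(text.split())        # Boolean step: presence, not frequency
    class_counts[label] += 1
    word_counts[label].update(words)
    vocab |= words

def predict(text):
    """Return the most probable class under the smoothed NB model."""
    words = set(text.split())
    best, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in words:
            # Laplace smoothing over the shared vocabulary
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best
```

Binarizing word counts often helps on short, sentiment-laden text, which is one reason the monograph reports BMNB performing competitively with SVMs.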
This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
This book constitutes the thoroughly refereed post-conference proceedings of the 10th International Workshop on Graphics Recognition, GREC 2013, held in Bethlehem, PA, USA, in August 2013. The 20 revised full papers presented were carefully reviewed and selected from 32 initial submissions. Graphics recognition is a subfield of document image analysis that deals with graphical entities in engineering drawings, sketches, maps, architectural plans, musical scores, mathematical notation, tables, and diagrams. Accordingly the conference papers are organized in 5 topical sessions on symbol spotting and retrieval, graphics recognition in context, structural and perceptual based approaches, low level processing, and performance evaluation and ground truthing.
"Practical LaTeX" covers the material that is needed for everyday LaTeX documents. This accessible manual is friendly, easy to read, and is designed to be as portable as LaTeX itself. A short chapter, "Mission Impossible," introduces LaTeX
documents and presentations. Read these 30 pages; you then should
be able to compose your own work in LaTeX. The remainder of the
book delves deeper into the topics outlined in "Mission Impossible"
while avoiding technical subjects. Chapters on presentations and
illustrations are a highlight, as is the introduction of LaTeX on
an iPad.
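In the spirit of the 30-page quick start the blurb describes, a complete everyday LaTeX document needs only a handful of lines. A generic illustration (not an excerpt from the book):

```latex
\documentclass{article}
\usepackage{amsmath}   % everyday math support

\title{My First Document}
\author{A. Reader}

\begin{document}
\maketitle

Text and a displayed equation:
\begin{equation}
  e^{i\pi} + 1 = 0
\end{equation}

\end{document}
```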
Data mining, an interdisciplinary field combining methods from artificial intelligence, machine learning, statistics and database systems, has grown tremendously over the last 20 years and produced core results for applications like business intelligence, spatio-temporal data analysis, bioinformatics, and stream data processing. The fifteen contributors to this volume are successful and well-known data mining scientists and professionals. Although by no means an exhaustive list, all of them have helped the field to gain the reputation and importance it enjoys today, through the many valuable contributions they have made. Mohamed Medhat Gaber has asked them (and many others) to write down their journeys through the data mining field, trying to answer the following questions:
1. What are your motives for conducting research in the data mining field?
2. Describe the milestones of your research in this field.
3. What are your notable success stories?
4. How did you learn from your failures?
5. Have you encountered unexpected results?
6. What are the current research issues and challenges in your area?
7. Describe your research tools and techniques.
8. How would you advise a young researcher to make an impact?
9. What do you predict for the next two years in your area?
10. What are your expectations in the long term?
In order to maintain the informal character of their contributions, they were given complete freedom as to how to organize their answers. This narrative presentation style provides PhD students and novices who are eager to find their way to successful research in data mining with valuable insights into career planning. In addition, everyone else interested in the history of computer science may be surprised about the stunning successes and possible failures computer science careers (still) have to offer.
This book constitutes the refereed proceedings of the 4th International Workshop on Controlled Natural Language, CNL 2014, held in Galway, Ireland, in August 2014. The 17 full papers and one invited paper presented were carefully reviewed and selected from 26 submissions. The topics include simplified language, plain language, formalized language, processable language, fragments of language, phraseologies, conceptual authoring, language generation, and guided natural language interfaces.
Delivering MPEG-4 Based Audio-Visual Services investigates the different aspects of end-to-end multimedia services; content creation, server and service provider, network, and the end-user terminal. Part I provides a comprehensive introduction to digital video communications, MPEG standards, and technologies, and deals with system level issues including standardization and interoperability, user interaction, and the design of a distributed video server. Part II investigates the systems in the context of object-based multimedia services and presents a design for an object-based audio-visual terminal, some of these features having been adopted by the MPEG-4 Systems specification. The book goes on to study the requirements for a file format to represent object-based audio-visual content and the design of one such format. The design introduces new concepts such as direct streaming that are essential for scalable servers. The final part of the book examines the delivery of object-based multimedia presentations and gives optimal algorithms for multiplex-scheduling of object-based audio-visual presentations, showing that the audio-visual object scheduling problem is NP-complete in the strong sense. The problem of scheduling audio-visual objects is similar to the problem of sequencing jobs on a single machine. The book compares these problems and adapts job-sequencing results to audio-visual object scheduling, and provides optimal algorithms for scheduling presentations under resource constraints, such as bandwidth (network constraints) and buffer (terminal constraints). In addition, the book presents algorithms that minimize the resources required for scheduling presentations and the auxiliary capacity required to support interactivity in object-based audio-visual presentations. 
Delivering MPEG-4 Based Audio-Visual Services is essential reading for researchers and practitioners in the areas of multimedia systems engineering and multimedia computing, network professionals, service providers, and all scientists and technical managers interested in the most up-to-date MPEG standards and technologies.
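The blurb's analogy between scheduling audio-visual objects and sequencing jobs on a single machine can be made concrete with a simple earliest-deadline-first (EDF) sketch: each object has a size and a delivery deadline, and transmission happens over a fixed-bandwidth channel. This is illustrative only; the book's optimal algorithms are more involved, and the general problem is, as noted, NP-complete:

```python
# Toy earliest-deadline-first multiplex scheduling: transmit each object
# (name, size_bits, deadline_s) over a channel of fixed bandwidth (bit/s),
# earliest deadline first, and check that every deadline is met.
# Object names and numbers below are hypothetical.

def edf_schedule(objects, bandwidth):
    """Return the transmission order, or None if a deadline is missed."""
    order = []
    elapsed = 0.0
    for name, size, deadline in sorted(objects, key=lambda o: o[2]):
        elapsed += size / bandwidth   # time to finish sending this object
        if elapsed > deadline:
            return None               # infeasible in earliest-deadline order
        order.append(name)
    return order

avos = [("audio", 64_000, 1.0),
        ("scene", 16_000, 0.5),
        ("video", 400_000, 5.0)]
```

With all objects available up front on a single non-preemptive channel, earliest-due-date ordering minimizes maximum lateness (the classic job-sequencing result the book adapts), so if this order misses a deadline, no order meets them all; the harder variants arise once release times, buffers, and interactivity enter.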
Automatic Indexing and Abstracting of Document Texts summarizes the latest techniques of automatic indexing and abstracting, and the results of their application. It also places the techniques in the context of the study of text, manual indexing and abstracting, and the use of the indexing descriptions and abstracts in systems that select documents or information from large collections. Important sections of the book consider the development of new techniques for indexing and abstracting. The techniques involve the following: using text grammars, learning of the themes of the texts including the identification of representative sentences or paragraphs by means of adequate cluster algorithms, and learning of classification patterns of texts. In addition, the book is an attempt to illuminate new avenues for future research. Automatic Indexing and Abstracting of Document Texts is an excellent reference for researchers and professionals working in the field of content management and information retrieval.
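The "identification of representative sentences" idea mentioned above is often bootstrapped from simple term statistics: score each sentence by the frequency of its content words across the document and keep the top scorers. A minimal sketch under that assumption (the stopword list and scoring are simplifications, not the book's method):

```python
# Frequency-based extractive abstracting: rank sentences by the average
# document-wide frequency of their content words.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "is", "in", "to", "it"}

def top_sentences(text, k=1):
    """Return the k highest-scoring sentences from a period-delimited text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w for s in sentences for w in s.lower().split()
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        content = [w for w in sentence.lower().split() if w not in STOPWORDS]
        # average frequency, so long sentences are not favored automatically
        return sum(freq[w] for w in content) / max(len(content), 1)

    return sorted(sentences, key=score, reverse=True)[:k]
```

Cluster-based and learned approaches like those surveyed in the book replace this raw-frequency score with theme models and classification patterns, but the extractive skeleton is the same.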
Renowned designers from a wide range of disciplines present highly diverse positions on design. These very personal conversations with nationally and internationally known designers give you a fascinating insight into the multifaceted design discussion of the 1990s.
This book constitutes the refereed proceedings of the Second International Workshop on Controlled Natural Language, CNL 2010, held in Marettimo Island, Italy, in September 2010. The 9 revised papers presented in this volume, together with 1 tutorial, were carefully reviewed and selected from 17 initial submissions. They broadly cover the field of controlled natural language, stressing theoretical and practical aspects of CNLs, relations to other knowledge representation languages, tool support, and applications.
This book constitutes the refereed proceedings of the 11th International Conference on Intelligent Tutoring Systems, ITS 2012, held in Chania, Crete, Greece, in June 2012. The 28 revised full papers, 50 short papers, and 56 posters presented were carefully reviewed and selected from 177 submissions. The specific theme of the ITS 2012 conference is co-adaptation between technologies and human learning. Beyond that, the highly interdisciplinary ITS conferences bring together researchers in computer science, informatics, and artificial intelligence on the one side, and cognitive science, educational psychology, and linguistics on the other. The papers are organized in topical sections on affect/emotions, affect/signals, games/motivation and design, games/empirical studies, content representation, feedback, non-conventional approaches, conceptual content representation, assessment constraints, dialogue, dialogue/questions, learner modeling, learning detection, interaction strategies for games, and empirical studies thereof in general.
You may like...
Linked Data in Linguistics…
Christian Chiarcos, Sebastian Nordhoff, …
Hardcover
R1,570
Discovery Miles 15 700
The SGML Implementation Guide - A…
Brian E. Travis, Dale C Waldt
Paperback
R1,638
Discovery Miles 16 380
Multilingual Information Retrieval…
Carol Peters, Martin Braschler, …
Hardcover
R1,572
Discovery Miles 15 720
Computer Analysis of Images and Patterns…
Ainhoa Berciano, Daniel Diaz-Pernil, …
Paperback
R1,652
Discovery Miles 16 520
Computer Analysis of Images and Patterns…
Ainhoa Berciano, Daniel Diaz-Pernil, …
Paperback
R1,664
Discovery Miles 16 640
Inductive Inference for Large Scale Text…
Catarina Silva, Bernadete Ribeiro
Hardcover
R5,352
Discovery Miles 53 520
Adobe Acrobat 6 - The Professional…
Donna L. Baker, Tom Carson
Paperback