Welcome to Loot.co.za!
Updated for Excel 2021 and based on the bestselling editions from previous versions, Excel 2021 / Microsoft 365 Programming by Example is a practical, how-to book on Excel programming, suitable for readers already proficient with the Excel user interface. If you are looking to automate routine Excel tasks, this book will progressively introduce you to programming concepts via numerous illustrated hands-on exercises. More advanced topics are demonstrated via custom projects. From recording and editing a macro and writing VBA code from scratch to programming the Ribbon interface and working with XML documents, this book takes you on a programming journey that will change the way you work with Excel. The book provides information on performing automatic operations on files, folders, and other Microsoft Office applications. It also covers proper use of event procedures, testing and debugging, and guides you through programming more advanced Excel features, such as working with VBA classes and raising your own events in standalone class modules. Includes companion files with source code, hands-on projects, and figures.
Whether you are a beginner or experienced user, learn about new features in this version or discover and use some of Word's functions for the first time. Joan Lambert, author of multiple books on the Microsoft Office Suite, creator of many Lynda.com videos, and an experienced corporate trainer, used her experience and knowledge to cover the most relevant functions for users at different levels. Suggested uses: Workplace -- lies flat for easy storage and access at a moment's notice to find a function you need to use, or to jog your memory for a function you do not use often; Company Training -- reduce help-desk calls and keep productivity flowing for a team or for your entire company; Students/Teachers/Parents -- help with the learning curve in a classroom or for your child and any projects requiring Word; College Students -- make sure you are using features that can make your life easier.
Vim is a fast and efficient text editor that will make you a faster and more efficient developer. It's available on almost every OS, and if you master the techniques in this book, you'll never need another text editor. In more than 120 Vim tips, you'll quickly learn the editor's core functionality and tackle your trickiest editing and writing tasks. This beloved bestseller has been revised and updated to Vim 7.4 and includes two brand-new tips and five fully revised tips. A highly configurable, cross-platform text editor, Vim is a serious tool for programmers, web developers, and sysadmins who want to raise their game. No other text editor comes close to Vim for speed and efficiency; it runs on almost every system imaginable and supports most coding and markup languages. Learn how to edit text the "Vim way": complete a series of repetitive changes with the Dot Formula, using one keystroke to strike the target, followed by one keystroke to execute the change. Automate complex tasks by recording your keystrokes as a macro. Discover the "very magic" switch that makes Vim's regular expression syntax more like Perl's. Build complex patterns by iterating on your search history. Search inside multiple files, then run Vim's substitute command on the result set for a project-wide search and replace. All without installing a single plugin! Two new tips explain how to run multiple ex commands as a batch and autocomplete sequences of words. "Practical Vim, Second Edition" will show you new ways to work with Vim 7.4 more efficiently, whether you're a beginner or an intermediate Vim user. All this, without having to touch the mouse. What You Need: Vim version 7.4
This book deals with a topical issue relating to the use of script in Japan, one which has the potential to reshape future script policy through the mediation of both orthographic practices and social relations. It tells the story of the impact of one of the most significant technological breakthroughs in Japan in the latter part of the twentieth century: the invention and rapid adoption of word-processing technology capable of handling Japanese script in a society where the nature of that script had previously mandated handwriting as the norm. The ramifications of this technology in both the business and personal spheres have been wide-ranging, extending from changes to business practices, work profiles, orthography and social attitudes to writing through to Japan's ability to construct a substantial presence on the Internet in recent years.
The symposium on which this volume was based brought together approximately fifty scientists from a variety of backgrounds to discuss the rapidly emerging set of competing technologies for exploiting a massive quantity of textual information. This group was challenged to explore new ways to take advantage of the power of on-line text. A billion words of text can be more generally useful than a few hundred logical rules, if advanced computation can extract useful information from streams of text and help find what is needed in the sea of available material. While the extraction task is a hot topic for the field of natural language processing and the retrieval task is a well-established concern of the field of information retrieval, these two disciplines came together at the symposium and have been cross-fertilizing more than ever. The book is organized in three parts. The first group of papers describes the current set of natural language processing techniques used for interpreting and extracting information from quantities of text. The second group gives some of the historical perspective, methodology, and current practice of information retrieval work; the third covers both current and emerging applications of these techniques. This collection of readings should give students and scientists alike a good idea of the current techniques as well as a general concept of how to go about developing and testing systems to handle volumes of text.
Master one of the most popular word processors ever with this essential, visual reference. Teach Yourself VISUALLY: Word 2019 provides readers with a thorough and visual exploration of the 2019 edition of Microsoft Word. Written by Guy Hart-Davis, the celebrated author of over 100 books on computing, Teach Yourself VISUALLY: Word 2019 allows you to quickly get up to speed with one of the most popular word processors on the planet. The book covers all the topics you'll need to comprehensively master Word 2019, and includes: full-color, step-by-step instructions showing you how to perform all the essential tasks of Microsoft Word 2019; how to set up and format documents, edit them, and add images and charts; and how to post documents online for sharing and reviewing and take advantage of all the newest features of Word. Newly updated to include the latest features of Microsoft Word, such as how to collaborate on documents in real time, draw and write with the digital pen, new accessibility options, and the new Resume Assistant, Teach Yourself VISUALLY: Word 2019 belongs on the shelf of anyone who wants to improve their effectiveness with this essential word processor.
This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The difference between rule-based and machine learning-based methods, as well as between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book's closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
In this book, Harley Hahn demystifies Emacs for programmers, students, and everyday users. The first part of the book carefully creates a context for your work with Emacs. What exactly is Emacs? How does it relate to your personal need to work quickly and to solve problems? Hahn then explains the technical details you need to understand to work with your operating system, the various interfaces, and your file system. In the second part of the book, Hahn provides an authoritative guide to the fundamentals of thinking and creating within the Emacs environment. You start by learning how to install and use Emacs with Linux, BSD-based Unix, Mac OS X, or Microsoft Windows. Written in Hahn's clear, comfortable, and engaging style, Harley Hahn's Emacs Field Guide will surprise you: an engaging book to enjoy now, a comprehensive reference to treasure for years to come. What you will learn: special Emacs keys; Emacs commands; buffers and windows; cursor, point, and region; kill/delete, move/copy, correcting, spell checking, and filling; searching, including regular expressions; Emacs major modes and minor modes; customizing using your .emacs file; built-in tools, including Dired; and games and diversions. Who this book is for: programmers, students, and everyday users who want an engaging and authoritative introduction to the complex and powerful Emacs working environment.
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular expressions can be an outlet for creativity, for brilliant programming, and for the elegant solution. Regular Expression Pocket Reference offers an introduction to regular expressions, pattern matching, metacharacters, modes and constructs, and then provides separate sections for each of the language APIs, with complete regex listings including: Supported metacharacters for each language API Regular expression classes and interfaces for Ruby, Java, .NET, and C# Regular expression operators for Perl 5.8 Regular expression module objects and functions for Python Pattern-matching functions for PHP and the vi editor Pattern-matching methods and objects for JavaScript Unicode Support for each of the languages With plenty of examples and other resources, Regular Expression Pocket Reference summarizes the complex rules for performing this critical text-processing function, and presents this often-confusing topic in a friendly and well-organized format. This guide makes an ideal on-the-job companion.
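The constructs such a reference covers translate directly into everyday code. Here is a minimal sketch in Python's re module; the log line and pattern are illustrative examples of my own, not taken from the book:

```python
import re

# Illustrative sample text (not from the book).
log_line = "2024-01-15 ERROR disk full on /dev/sda1"

# Metacharacters: \d matches a digit, + means one or more,
# and (?P<name>...) creates a named capture group.
pattern = re.compile(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>[A-Z]+) (?P<msg>.+)")

match = pattern.match(log_line)
if match:
    print(match.group("level"))  # ERROR
    print(match.group("msg"))    # disk full on /dev/sda1

# Substitution with numbered backreferences reorders the date fields.
print(re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", log_line))
```

The same pattern syntax carries over, with the API differences the book catalogs, to Perl, Ruby, Java, PHP, and the other languages it covers.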
TeX is the program for printing high quality mathematical text to which all others are compared. It is flexible enough to be used on many different computer architectures and operating systems, ranging from microcomputers to mainframes. In a relatively short period of time it has become the standard tool for mathematical typesetting at practically all major universities. The versatility of TeX has allowed it to be used in a wide variety of applications; for example, it is used for publishing scholarly journals which adhere to the highest typesetting standards, and also to publish student papers and theses. This book is designed for the complete newcomer to TeX. It starts by showing how to typeset simple text that mostly uses the defaults predefined by TeX. By use of graded exercises, the situations covered slowly become more complex and include many different types of mathematical constructions and tables. In the end it is possible to handle almost any standard mathematical situation. The different tables presented in this book allow it to be used as a quick reference. The similar features of TeX are gathered together whenever possible to give an overview that is a good foundation for becoming more proficient and for doing more creative typesetting. This book can be used either as a tool to learn just enough TeX to write standard mathematical papers of modest complexity or as a building block to prepare for more ambitious typesetting projects.
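The kind of input the graded exercises build toward looks like the following minimal plain TeX sketch; the content is an illustrative example of my own, not taken from the book:

```tex
% Plain TeX: ordinary text mixed with inline ($...$) and display ($$...$$) math.
The roots of the equation $ax^2+bx+c=0$ are given by
$$x={-b\pm\sqrt{b^2-4ac}\over 2a}.$$
\bye
```

Everything here uses TeX's predefined defaults: no macros are defined, and `\bye` simply ends the job.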
No matter what you want to write, Scrivener makes it easier. Whether you're a planner, a seat-of-the-pants writer, or something in between, Scrivener provides tools for every stage of the writing process. "Scrivener For Dummies" walks you step-by-step through this popular writing software's best features. This friendly "For Dummies" guide starts with the basics, but even experienced scriveners will benefit from the helpful tips for getting more from their favourite writing software. Walks you through customizing project templates for your project needs. Offers useful advice on compiling your project for print and e-book formats. Helps you set up project and document targets and minimize distractions to keep you on track and on deadline. Explains how to storyboard with the corkboard, create collections, and understand their value. Shows you how to use automated backups to protect your hard work along the way. From idea inception to manuscript submission, "Scrivener For Dummies" makes it easier than ever to plan, write, organize, and revise your masterpiece in Scrivener.
Artificial intelligence has been utilized in a diverse range of industries as more people and businesses discover its many uses and applications. A current field of study that requires more attention, as there is much opportunity for improvement, is the use of artificial intelligence within literary works and social media analysis. Artificial Intelligence Applications in Literary Works and Social Media presents contemporary developments in the adoption of artificial intelligence in textual analysis of literary works and social media and introduces current approaches, techniques, and practices in data science that are implemented to scrape and analyze text data. This book initiates a new multidisciplinary field that combines artificial intelligence, data science, social science, literature, and social media study. Covering key topics such as opinion mining, sentiment analysis, and machine learning, this reference work is ideal for computer scientists, industry professionals, researchers, scholars, practitioners, academicians, instructors, and students.
The quick way to learn Word for Office 365! This is learning made easy. Get more done quickly with Word for Office 365. Jump in wherever you need answers -- brisk lessons and informative screenshots show you exactly what to do, step by step. Create great-looking, well-organized documents to enhance communication. Use headings, bookmarks, and footnotes for more intuitive access to knowledge. Visualize information by using diagrams and charts. Illustrate concepts by using 3D models, icons, and screen clippings. Collaborate, track changes, and coauthor documents in real time. Enforce security and privacy in electronic documents. Quickly build tables of contents, indexes, and equations. Generate individualized emails, letters, labels, envelopes, directories, and catalogs. Supercharge efficiency with custom styles, themes, templates, and building blocks. Look up just the tasks and lessons you need.
The six volume set LNCS 10634, LNCS 10635, LNCS 10636, LNCS 10637, LNCS 10638, and LNCS 10639 constitutes the proceedings of the 24th International Conference on Neural Information Processing, ICONIP 2017, held in Guangzhou, China, in November 2017. The 563 full papers presented were carefully reviewed and selected from 856 submissions. The 6 volumes are organized in topical sections on Machine Learning, Reinforcement Learning, Big Data Analysis, Deep Learning, Brain-Computer Interface, Computational Finance, Computer Vision, Neurodynamics, Sensory Perception and Decision Making, Computational Intelligence, Neural Data Analysis, Biomedical Engineering, Emotion and Bayesian Networks, Data Mining, Time-Series Analysis, Social Networks, Bioinformatics, Information Security and Social Cognition, Robotics and Control, Pattern Recognition, Neuromorphic Hardware and Speech Processing.
This book presents direct and concise explanations and examples of LaTeX syntax and structures, allowing students and researchers to quickly understand the basics required for writing and preparing book manuscripts, journal articles, reports, presentation slides, and academic theses and dissertations for publication. Unlike much of the literature currently available on LaTeX, which takes a more technical stance focusing on the details of the software itself, this book presents a user-focused guide concerned with its application to everyday tasks and scenarios. It is packed with exercises and looks at topics like formatting text, drawing and inserting tables and figures, bibliographies and indexes, equations, and slides, and provides valuable explanations of error and warning messages so you can get work done with the least time and effort needed. This means LaTeX in 24 Hours can be used by students and researchers with little or no previous experience with LaTeX to gain quick and noticeable results, as well as serving as a quick reference guide for those with more experience who want to refresh their knowledge of the subject.
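The basics the book covers fit in a short source file. The following is a minimal, illustrative LaTeX skeleton; the section title and contents are placeholders of my own, not material from the book:

```latex
% A minimal article combining text, emphasis, and a numbered equation.
\documentclass{article}
\begin{document}

\section{Introduction}
Some text with \emph{emphasis}, a footnote,\footnote{Like this one.} and
a numbered display equation:
\begin{equation}
  E = mc^2
\end{equation}

\end{document}
```

Compiling this with `pdflatex` produces a one-page PDF with an automatically numbered section and equation, which is the workflow the book's exercises practice.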
Marco and Andreas Öchsner give a compact, example-based introduction to creating professional text documents with LaTeX. They introduce the most important elements of a scientific document in a simple and understandable way and show how to implement them in the LaTeX macro language. The aim is a focus on the fundamental commands and their application in designing high-quality text layouts. Notes on obtaining and installing LaTeX round out the compilation.
The story of writing in the digital age is every bit as messy as the ink-stained rags that littered the floor of Gutenberg's print shop or the hot molten lead of the Linotype machine. During the period of the pivotal growth and widespread adoption of word processing as a writing technology, some authors embraced it as a marvel while others decried it as the death of literature. The product of years of archival research and numerous interviews conducted by the author, Track Changes is the first literary history of word processing. Matthew Kirschenbaum examines how the interests and ideals of creative authorship came to coexist with the computer revolution. Who were the first adopters? What kind of anxieties did they share? Was word processing perceived as just a better typewriter or something more? How did it change our understanding of writing? Track Changes balances the stories of individual writers with a consideration of how the seemingly ineffable act of writing is always grounded in particular instruments and media, from quills to keyboards. Along the way, we discover the candidates for the first novel written on a word processor, explore the surprisingly varied reasons why writers of both popular and serious literature adopted the technology, trace the spread of new metaphors and ideas from word processing in fiction and poetry, and consider the fate of literary scholarship and memory in an era when the final remnants of authorship may consist of folders on a hard drive or documents in the cloud.
Like to build websites in the wild with your MacBook? This concise hands-on guide introduces you to the ideal editor: Coda 2. Rather than clutter your screen with shell access, a separate CSS editor, and a version control app, you'll discover how Coda's "one-window web development" bundles everything into one neat application. Take Coda on a trial run, then learn step-by-step how to configure each feature to fit your working style. You'll find out firsthand how Coda will save you time and effort on your next project. Get to know Coda's workflow by building a sample site Delve into features such as the tab bar, path bar, sidebar, and Sites view Set up your own development environment - and dig deeper into the editor's options Get tips for taking full advantage of the text and MySQL editors Create a Git or Subversion repository for source control management Learn the finer points of sharing project documents across a network Discover the built-in reference books, and learn how to extend Coda
This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: - information extraction and retrieval; - text classification and clustering; - opinion mining; - comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications concerned, by highlighting the relationship between models and applications and by illustrating the behavior of each model on real collections. Textual Information Access is organized around four themes: information retrieval and ranking models, classification and clustering (logistic regression, kernel methods, Markov fields, etc.), multilingualism and machine translation, and emerging applications such as information exploration. Contents Part 1: Information Retrieval 1. Probabilistic Models for Information Retrieval, Stephane Clinchant and Eric Gaussier. 2. Learnable Ranking Models for Automatic Text Summarization and Information Retrieval, Massih-Reza Amini, David Buffoni, Patrick Gallinari, Tuong Vinh Truong and Nicolas Usunier. Part 2: Classification and Clustering 3. Logistic Regression and Text Classification, Sujeevan Aseervatham, Eric Gaussier, Anestis Antoniadis, Michel Burlet and Yves Denneulin. 4. Kernel Methods for Textual Information Access, Jean-Michel Renders. 5. Topic-Based Generative Models for Text Information Access, Jean-Cedric Chappelier. 6. Conditional Random Fields for Information Extraction, Isabelle Tellier and Marc Tommasi. Part 3: Multilingualism 7. Statistical Methods for Machine Translation, Alexandre Allauzen and Francois Yvon. Part 4: Emerging Applications 8. Information Mining: Methods and Interfaces for Accessing Complex Information, Josiane Mothe, Kurt Englmeier and Fionn Murtagh. 9. Opinion Detection as a Topic Classification Problem, Juan-Manuel Torres-Moreno, Marc El-Beze, Patrice Bellot and Frederic Bechet.
Concise and practice-oriented, this book teaches you the central principles on which DITA is based. The most important DITA features are explained using simple examples that can be carried over directly to your own environment. This makes this essential a good starting point for anyone not yet familiar with DITA, and an ideal first decision-making aid when it comes to optimizing an information landscape.
You may like...
Sport - a Stage for Life: How to Connect…
Cristiana Pinciroli
Paperback
Momstrology - The Astrotwins' Guide to…
Ophira Edut, Tali Edut
Paperback