The first edition of this handbook appeared in 1996 and dealt with academic libraries. It gained wide acceptance and was translated into five other languages. Ten years on, the new edition widens the perspective to public libraries and adds indicators for electronic services and cost-effectiveness. The handbook has been considerably enlarged, from 17 to 40 indicators, and gives practical help by showing examples of possible results for each indicator. It is intended as a practical instrument for the evaluation of library services. Although it aims specifically at academic and public libraries, most indicators will also apply to other types of libraries.
Creating Digital Exhibits for Cultural Institutions will show you how to create digital exhibits and experiences for your users that are informative, accessible and engaging. Illustrated with real-world examples of digital exhibits from a range of GLAMs, the book addresses the many analytical aspects and practical considerations involved in the creation of such exhibits. It will support you as you go about: analyzing content to find hidden themes, applying principles from the museum exhibit literature, placing your content within internal and external information ecosystems, selecting exhibit software, and finding ways to recognize and use your own creativity. An exhibit provides a useful and creative connecting point where your content, your organization, and your audience can meet; the book also shows that such exhibits can offer a way to revisit difficult and painful material through frank and enlightened analyses of racism, colonialism, sexism, class, and LGBTQI+ issues. Creating Digital Exhibits for Cultural Institutions is an essential resource for librarians, archivists, and other cultural heritage professionals who want to promote their institution's digital content to the widest possible audience. Academics and students working in library and information science, museum studies and digital humanities will also find much to interest them in its pages.
The growth of data-collecting goods and services, such as ehealth and mhealth apps, smart watches, mobile fitness and dieting apps, electronic skin and ingestible tech, combined with recent technological developments such as increased capacity of data storage, artificial intelligence and smart algorithms, has spawned a big data revolution that has reshaped how we understand and approach health data. Recently the COVID-19 pandemic has foregrounded a variety of data privacy issues. The collection, storage, sharing and analysis of health-related data raises major legal and ethical questions relating to privacy, data protection, profiling, discrimination, surveillance, personal autonomy and dignity. This book examines health privacy questions in light of the General Data Protection Regulation (GDPR) and the general data privacy legal framework of the European Union (EU). The GDPR is a complex and evolving body of law that aims to deal with several technological and societal health data privacy problems, while safeguarding public health interests and addressing its internal gaps and uncertainties. The book answers a diverse range of questions including: What role can the GDPR play in regulating health surveillance and big (health) data analytics? Can it catch up with internet-age developments? Are the solutions to the challenges posed by big health data to be found in the law? Does the GDPR provide adequate tools and mechanisms to ensure public health objectives and the effective protection of privacy? How does the GDPR deal with data that concern children's health and academic research? By analysing a number of diverse questions concerning big health data under the GDPR from various perspectives, this book will appeal to those interested in privacy, data protection, big data, health sciences, information technology, the GDPR, EU and human rights law.
The International Federation of Library Associations and Institutions (IFLA) is the leading international body representing the interests of library and information services and their users. It is the global voice of the information profession. The series IFLA Publications deals with many of the means through which libraries, information centres, and information professionals worldwide can formulate their goals, exert their influence as a group, protect their interests, and find solutions to global problems.
Since the foundations of international cataloguing standards were laid in 1971, a host of unforeseen factors have had a dramatic impact on libraries, forcing them to rethink their cataloguing policy. The automated processing of bibliographic data has become commonplace, while new modes of electronic publishing are developed every day. The rise of databases compiled on an international scale raises the problem of how to create codes and systems capable of being used in all countries concerned. Finally, financial pressures have forced many libraries to do more "minimal level" cataloguing to keep pace with the growth of publishing output. Adopting a user-focused approach, this study systematically defines what information library patrons and staff, publishers, distributors, and retailers expect to find. The wide range of contexts in which data is used -- from purchasing, cataloguing, and interlibrary loan to reference and preservation -- receives careful consideration. The model set forth here will serve as a welcome starting point to those charged with designing cataloguing codes and systems to suit our constantly evolving information environment.
Operational information management is at a crossroads as it sheds the remaining vestiges of its paper-based processes and moves through the uncharted domain of electronic data processes. The final outcome is not yet in full focus, but real progress has been made in the transition to electronic documents providing the aviation industry with a clear direction. This book looks at a combination of industry initiatives and airline successes that point to the next steps that operators can take as they transition to fully integrated information management systems. Although the route has not been fully identified, it is evident that a key to successful long-term efficient information management is industry-wide cooperation. The chapters are authored by a range of experts in operational information management, and collectively, they outline ways that operators can improve efficiency across flight, ground and maintenance operations. Considerations and recommendations are identified and presented addressing the following priorities: safety-critical information and procedures; human factors; information security; and operational information standardization. The readership includes airline flight operations managers and standards personnel, airline operating documents and publication specialists, airline information managers, commercial pilots, airline maintenance managers and personnel, manufacturers and vendors of aviation products, aviation regulators and policy makers, aviation researchers and developers of information technologies, and military technical publications specialists.
This book gives a unique view of the current hot topic of continuing professional development/lifelong learning in the information services environment. It aims to provide the reader with guidelines for conceptualising, designing and measuring successful programmes for professional learning, staff development and professional growth in the organization.
Text Retrieval and Filtering: Analytical Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case with multiple terms, both with term independence and term dependence, and the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. The book focuses on the performance of systems that retrieve natural language text, considering full sentences as well as phrases and individual words. The last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Readers interested in grammatical information should carefully examine the earlier chapters, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. The book is suitable as a secondary text for a graduate-level course on information retrieval or linguistics, and as a reference for researchers and practitioners in industry.
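To give a flavor of what "analytically computing performance" can mean in the single-term case described above, here is a generic illustration (not the book's own models): if the probability of a term occurring in relevant and non-relevant documents is known, expected precision and recall follow from Bayes' rule rather than from running an experiment. All probability values below are assumptions chosen purely for illustration.

```python
# Analytic performance estimate for a single-term query.
# Assumed (illustrative) probabilities, not measured values:
p_term_given_rel = 0.8      # P(term occurs | document is relevant)
p_term_given_nonrel = 0.1   # P(term occurs | document is not relevant)
p_rel = 0.05                # prior proportion of relevant documents

# Strategy: retrieve every document containing the term.
# Total probability that a document contains the term:
p_term = p_term_given_rel * p_rel + p_term_given_nonrel * (1 - p_rel)

# Precision = P(relevant | term present), by Bayes' rule;
# recall = fraction of relevant documents that contain the term.
precision = p_term_given_rel * p_rel / p_term
recall = p_term_given_rel

print(round(precision, 3), recall)  # 0.296 0.8
```

The same style of calculation extends to multi-term queries once an independence (or dependence) assumption between terms is fixed, which is the kind of analysis the book's later chapters pursue.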
Knowledge Management was the theme of the Standing Conference of Eastern, Central and Southern African Library and Information Associations (SCECSAL XVII) in 2006. This selection of conference papers provides a cross-disciplinary approach to knowledge, information and development and how the three together can mould a new and more informed society. The challenge is to make our libraries more people-centered and Afro-centric, not simply serving the interests of the elite and paying little attention to the plight of the less well off. This needs to change, with libraries becoming more inclusive and serving the needs of all. These papers raise provocative questions, and provide an insight into the struggle of information services in this part of Africa to be part of an emerging information and knowledge society.
Using database-driven web pages or web content management (WCM) systems to manage increasingly diverse web content and to streamline workflows is a commonly practiced solution in libraries today. However, limited library web content management models and funding constraints prevent many libraries from purchasing commercially available WCM systems, and the lack of much-needed technical expertise in building in-house WCM systems presents a great challenge for libraries of all types. Content and Workflow Management for Library Websites: Case Studies provides practical and applicable web content management solutions through case studies. It contains successful database-to-web applications as employed in a variety of academic libraries. The applications vary in scope and cover a range of practical how-to-do-it examples, from database-driven web development, locally created web content management systems, systems for distributing content management responsibilities, and dynamic content delivery, to open source tools such as MySQL and PHP for managing content. Issues and challenges associated with the development process are discussed, and the authors also cover the detours, sand traps, and missteps inherent in any real learning process.
Online Searching prepares students in library and information science programs to assist information seekers at all levels, from university faculty to elementary school students. Included in the third edition are interviews with librarians and other information professionals whose words of wisdom broaden graduate students' perspectives regarding online searching in a variety of work settings serving different kinds of information seekers. The book's chapters are organized according to the steps in the search process:
1. Conducting a reference interview to determine what the seeker wants
2. Identifying sources that are likely to produce relevant information for the seeker's query
3. Determining whether the user seeks a known item or information about a subject
4. Dividing the query into main ideas and combining them logically
5. Representing the query as input to the search system
6. Conducting the search and responding strategically
7. Displaying retrievals, assessing them, and responding tactically
A new chapter on web search engines builds on students' existing experience with keyword searching and relevance ranking by introducing them to more sophisticated techniques to use in the search box and on the results page. A completely revised chapter on assessing research impact discusses the widespread use of author and article iMetrics, a trend that has developed rapidly since the publication of the second edition. More than 100 figures and tables provide readers with visualizations of concepts and examples of real searches and actual results. Textboxes offer additional topical details and professional insights. New videos supplement the text by delving more deeply into topics such as database types, information organization, specialized search techniques, results filtering, and the role of browsing in the information seeking process. An updated glossary makes it easy to find definitions of terms used throughout the book.
With new and updated material, this edition of Online Searching gives students knowledge and skills for success when intermediating between information seekers and the sources they need.
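The search-process steps of dividing a query into main ideas and combining them logically can be sketched as Boolean set operations over an inverted index. A minimal Python illustration, using a toy document collection invented for this example (not drawn from the book):

```python
# Toy collection; each document is a bag of words.
docs = {
    1: "school libraries and young readers",
    2: "academic library reference services",
    3: "search engines and relevance ranking",
    4: "reference interviews in public libraries",
}

# Inverted index: term -> set of ids of documents containing it.
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def or_terms(*terms):
    """Union of postings: any synonym within one main idea may match (OR)."""
    result = set()
    for t in terms:
        result |= index.get(t, set())
    return result

def and_ideas(*idea_sets):
    """Intersection of postings: every main idea must match (AND)."""
    result = idea_sets[0]
    for s in idea_sets[1:]:
        result &= s
    return result

# Query with two main ideas: (libraries OR library) AND reference
idea1 = or_terms("libraries", "library")
idea2 = or_terms("reference")
print(sorted(and_ideas(idea1, idea2)))  # -> [2, 4]
```

OR widens each concept with synonyms; AND narrows the result to documents matching every concept, which is the logic behind the faceted query structures the book teaches.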
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
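As one concrete instance of the vector space models and document clustering mentioned above, documents can be represented as term-frequency vectors and grouped by cosine similarity. A minimal pure-Python sketch, with documents invented for illustration:

```python
import math
from collections import Counter

# Invented mini-collection for illustration.
docs = [
    "indexing and search strategies for text",
    "indexing strategies for digital text collections",
    "graph theory and probability models",
]

# Vector space model: each document becomes a term-frequency vector.
vectors = [Counter(d.split()) for d in docs]

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Documents 0 and 1 share vocabulary; document 2 mostly does not,
# so a clustering step would group 0 and 1 together.
print(cosine(vectors[0], vectors[1]) > cosine(vectors[0], vectors[2]))  # True
```

Production systems add term weighting (e.g. TF-IDF) and scalable index structures on top of this core representation, but the similarity computation is the same idea.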
Countries around the globe are grappling with issues of archive legislation -- both in established societies where old laws no longer respond to modern realities and in the growing number of new states seeking to establish their own legal framework. These two volumes outline the progress and procedures developed by nations grappling with such issues. The reports presented here reveal common elements that may be fruitfully addressed by international effort and will act as a sourcebook of ideas and action.
This book examines the digitalization of longstanding problems of technological advance that produce inequalities and automated governance, which relieves subjects of agency and critical thought, and prompts a need to weaponize thoughtfulness against technocratic designs. The book situates digital-era problems relative to those of previous sociotechnical milieux and argues that technical advance perennially embeds corrosive effects on social relations and relations of production, recognizing variation across contexts and relative to entrenched societal hierarchies of race and other axes of difference and their intersections. Societal tolerance of these technologies, despite abundant evidence of their harmful effects, requires attention. The book explains blindness to social injustice by technocratic thinking delivered through education, as well as truths embraced in the data sciences coupled with governance in universities and the private sector that protects these truths from critique. Institutional inertia suggests the benefits of communitarianism, which strives for change emanating from civil society. Scaling postcapitalist communitarian values through community-based peer production presents opportunities. However, enduring problems require critical reflection, continual revision of strategies, and active participation among diverse community citizens. This book is written with critical geographic sensibilities for an interdisciplinary audience of scholars and graduate and undergraduate students in the social sciences, humanities, and data sciences.
Ernst Mach (1838-1916) was a seminal philosopher-scientist and a deserving member of the canon of major twentieth-century thinkers. Yet, despite a healthy resurgence in Mach studies, he is still widely thought to represent a simplistic positivist, even sensationalist, position that does not at all reflect the depth of Mach's interests and subtlety as a philosopher. By exploring Mach's views on science as well as philosophy, this book attempts to wrest him free from his customary association with logical positivism and to reinterpret him on his own terms as a natural philosopher and naturalist about human knowledge. Mach's development and his influences from 19th century German philosophy and science are probed in great conceptual and historical detail, and attention is paid to his unpublished Nachlass as well as to the affinities between Mach's thought and that of other major philosopher-scientists such as Einstein, Bertrand Russell, William James, Helmholtz, Riemann, Herbart and Kant. In particular, the book strives to set forth the true nature of Mach's sensation-elements, the motivations for his critique of the concepts of space and time in physics, and the real meaning of his famous critique of metaphysics. The author's work has appeared in Synthese, Kant-Studien, Studies in History and Philosophy of Modern Physics and the Journal of the History of the Behavioral Sciences, but here these inquiries are gathered into a unified historico-critical treatment that follows Mach's conceptual development and the culmination of his work in a unique and intriguing natural philosophy. Physicists, psychologists, philosophers of science, historians of twentieth-century thought and culture, and educators will find this volume a valuable help in interpreting Mach's ideas in a context that includes philosophy and science and the bridge between them.
This volume contains the proceedings of a special conference held in Florence, August 2009. The theoretical and methodological aspects of rethinking semantic access to information and knowledge are explored. Innovative projects deployed to cope with the challenges of the future are presented and discussed. This book offers a unique opportunity for librarians and other information professionals to get acquainted with the state of the art in subject indexing.
This volume comprises contributions from three conferences: on legal deposit in a digital environment, on web harvesting and archiving, and on newspapers in the geographical context of the Mediterranean. The main focus is on how to acquire, preserve and make available digital files -- issues that continue to be hot topics even in a world dominated by monographs.
You may like...
A Catalogue of Some of the Rarer Books… by Charles E. S. Chambers (Hardcover, R756)
English Heraldic Book-stamps, Figured… by Cyril Davenport (1848-1941) (Hardcover, R1,013)
The Subjects of Literary and Artistic… by Enrico Bonadio and Cristiana Sappa (Hardcover, R3,238)
Research Anthology on Applying Social… by the Information Resources Management Association (Hardcover, R9,516)