This unbeatable CGP Student Book covers all of the core content for both years of AQA A-Level Physics - plus the optional topics 9-12. It's brimming with in-depth, accessible notes, clear diagrams, photographs, tips and worked examples. Throughout the book there are lots of practice questions and end of section summaries with exam-style questions (answers at the back). There's detailed guidance on Maths Skills and Practical Skills, as well as indispensable advice for success in the final exams. We've even thrown in a free Online Edition of the whole book - just use the code printed inside the book to access it on your PC, Mac or tablet. If you'd prefer Year 1 (9781782943235) & Year 2 (9781782943280) in separate books, CGP has them too! And for more detailed coverage of the mathematical elements of A-Level Physics, try our Essential Maths Skills book (9781782944713)!
This book advocates the importance and value of errors for the progress of scientific research. Hans Kricheldorf explains that most great scientific achievements rest on an iterative process (an 'innate self-healing mechanism'): errors are committed and then checked over and over again, until new findings and knowledge finally arise. New ideas are often first met with refusal, not only in everyday life but also in scientific and medical research. The author outlines how great ideas had to ripen over time before winning recognition and acceptance. The book shows, in an entertaining way but without schadenfreude, that even some of the most famous discoverers appear in a completely different light once the errors they committed in their work are considered. The book is divided into two parts. The first lays a foundation for the discussion by introducing important concepts, terms and definitions, such as (natural) sciences and scientific research, laws of nature, paradigm shift, and progress (in science). It compares the natural sciences with other scholarly disciplines, such as historical research or sociology, and examines whether scientific research can generate knowledge of permanent validity. The second part contains a collection of famous fallacies and errors from medicine, biology, chemistry, physics and geology, and describes how they were corrected. Readers will be astonished and intrigued by the meanders that had to be explored in some cases before scientists arrived at facts that are today's standard and state of the art in science and technology. This is an entertaining and amusing, but also highly informative, book not only for scientists and specialists but for everybody interested in science, research, scientific progress, and its history.
Sir Charles Wheatstone (1802-75) was a shoemaker's son whose fascination with physics led him to become one of the most celebrated scientists and inventors of his time. Apprenticed to his uncle, a musical instrument manufacturer, Wheatstone studied the physics of sound, publishing his first scientific paper in 1823. He was the chief developer of telegraphy, inventing increasingly advanced instruments for transmitting and receiving information. Telegraphy revolutionized communication in the Victorian era, eventually making almost instantaneous global communication possible. This collection of Wheatstone's works, first published in 1879, spans his entire career and includes fully illustrated details of many of his pioneering inventions. His broad-ranging research led to numerous important advances; those in telegraphy and cryptography were still in military use as late as the Second World War. This collection is a valuable source for the history of science, and a fitting tribute to Wheatstone's 'industry and versatility'.
Renowned for its interactive focus on conceptual understanding, Halliday and Resnick's Principles of Physics, 12th edition, is an industry-leading resource in physics teaching with expansive, insightful, and accessible treatments of a wide variety of subjects. Focusing on several contemporary areas of research and a wide array of tools that support students' active learning, this book guides students through the process of learning how to effectively read scientific material, identify fundamental concepts, reason through scientific questions, and solve quantitative problems. This International Adaptation of the twelfth edition is built to be a learning center with practice opportunities, simulations, and videos. Numerous practice and assessment questions are available to ensure that students understand the problem-solving processes behind key concepts and understand their mistakes while working through problems.
Lie theory is a mathematical framework for encoding the concept of symmetries of a problem, and was the central theme of an INdAM intensive research period at the Centro de Giorgi in Pisa, Italy, in the academic year 2014-2015. This book gathers the key outcomes of this period, addressing topics such as: structure and representation theory of vertex algebras, Lie algebras and superalgebras, as well as hyperplane arrangements with different approaches, ranging from geometry and topology to combinatorics.
* Aims at the research of a physics model based on an advanced thinking paradigm
* Presents deep-going solutions along with a richer connotation
* Contains fully analyzed models
* Deeply resolves and expands some classical physics models into novel questions
* Optimizes many existing solutions to the classical questions and even provides novel solutions to classic models and common questions
* Draws clearer, detailed, and exquisite schematic illustrations for all physical models
The last lecture course that Nobel Prize winner Richard P. Feynman gave at Caltech from 1983 to 1986 was not on physics but on computer science. The first edition of the Feynman Lectures on Computation, published in 1996, provided an overview of standard and not-so-standard topics in computer science given in Feynman's inimitable style. Although now over 20 years old, most of the material is still relevant and interesting, and Feynman's unique philosophy of learning and discovery shines through. For this new edition, Tony Hey has updated the lectures with an invited chapter from Professor John Preskill on "Quantum Computing 40 Years Later." This contribution captures the progress made towards building a quantum computer since Feynman's original suggestions in 1981. The last 25 years have also seen the "Moore's Law" roadmap for the IT industry coming to an end. To reflect this transition, John Shalf, Senior Scientist at Lawrence Berkeley National Laboratory, has contributed a chapter on "The Future of Computing Beyond Moore's Law." The final update for this edition captures Feynman's interest in Artificial Intelligence and Artificial Neural Networks. Eric Mjolsness, now a professor of Computer Science at the University of California Irvine, was a Teaching Assistant for Feynman's original lecture course and his research interests are now in the application of Artificial Intelligence and Machine Learning for multi-scale science. He has contributed a chapter on "Feynman on Artificial Intelligence and Machine Learning" that captures the early discussions with Feynman and also looks towards future developments. This exciting and important work provides key reading for students and scholars in the fields of computer science and computational physics.
The scattering data of the considered inverse scattering problems (ISPs) are described completely, and the associated IVPs or IBVPs for the nonlinear evolution equations (NLEEs) are solved step by step. Namely, each NLEE can be written as the compatibility condition of two linear equations, so that solving the IVP or IBVP is effectively embedded in the scheme of the inverse scattering method (ISM).
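The compatibility condition mentioned here is usually written in zero-curvature (Lax-pair) form. As a generic sketch in standard ISM notation (not taken from this text):

```latex
% Auxiliary linear problems for the eigenfunction \psi(x,t;\lambda):
\psi_x = U(x,t;\lambda)\,\psi, \qquad \psi_t = V(x,t;\lambda)\,\psi .
% Demanding \psi_{xt} = \psi_{tx} yields the zero-curvature condition
U_t - V_x + [U, V] = 0 ,
% which reproduces the NLEE for every value of the spectral parameter \lambda.
```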
Frank von Hippel has been at the forefront of those scientists grappling with the troubled legacy of our Nuclear Age. Von Hippel offers insights about the choices we must make and how science can help us to make them. Topics include nuclear power, atomic weapons, disarmament, energy and the future of automobiles. The scientist's role in public life and the importance of "making trouble" is emphasized. Of interest to physicists, particularly those working in nuclear physics, policy makers, environmentalists and those concerned with nuclear disarmament and the role of science in society.
Over the course of his distinguished career, Vladimir Maz'ya has made a number of groundbreaking contributions to numerous areas of mathematics, including partial differential equations, function theory, and harmonic analysis. The chapters in this volume - compiled on the occasion of his 80th birthday - are written by distinguished mathematicians and pay tribute to his many significant and lasting achievements.
Special numerical techniques are already needed to deal with n x n matrices for large n. Tensor data are of size n x n x ... x n = n^d, where n^d exceeds the computer memory by far. They appear for problems of high spatial dimensions. Since standard methods fail, a particular tensor calculus is needed to treat such problems. This monograph describes the methods by which tensors can be practically treated and shows how numerical operations can be performed. Applications include problems from quantum chemistry, approximation of multivariate functions, solution of partial differential equations, for example with stochastic coefficients, and more. In addition to containing corrections of the unavoidable misprints, this revised second edition includes new parts ranging from single additional statements to new subchapters. The book is mainly addressed to numerical mathematicians and researchers working with high-dimensional data. It also touches problems related to Geometric Algebra.
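The storage blow-up described here, and the factored representations tensor calculus uses to avoid it, can be sketched in a few lines (a generic rank-1 illustration, not code from this monograph):

```python
# Why full tensor storage fails for large d: a d-way tensor with n points
# per mode has n**d entries, while a rank-1 tensor v1 (x) v2 (x) ... (x) vd
# needs only d*n numbers.

n, d = 10, 8
full_entries = n ** d            # 10^8 entries if stored explicitly
factored_entries = d * n         # 80 numbers for a rank-1 tensor

# Rank-1 tensor given by its factor vectors (arbitrary sample values):
factors = [[1.0 / (k + i + 1) for i in range(n)] for k in range(d)]

def entry(index):
    """Evaluate one entry (i1, ..., id) of the rank-1 tensor on demand,
    without ever forming the full n**d array."""
    p = 1.0
    for k, i in enumerate(index):
        p *= factors[k][i]
    return p

print(full_entries, factored_entries, entry((0,) * d))
```

Higher-rank formats (canonical, Tucker, hierarchical) generalize this idea, trading a modest number of stored factors for on-demand evaluation.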
The prime goal of this monograph, which comprises a total of five volumes, is to derive sharp spectral asymptotics for broad classes of partial differential operators using techniques from semiclassical microlocal analysis, in particular, propagation of singularities, and to subsequently use the variational estimates in "small" domains to consider domains with singularities of different kinds. In turn, the general theory (results and methods developed) is applied to the Magnetic Schroedinger operator, miscellaneous problems, and multiparticle quantum theory. In this volume the general microlocal semiclassical approach is developed, and microlocal and local semiclassical spectral asymptotics are derived.
The concept of linearization stability arises when one compares the solutions to a linearized equation with solutions to the corresponding true equation. This requires a new definition of linearization stability adapted to Einstein's equation. However, this new definition cannot be applied directly to Einstein's equation because energy conditions tie together deformations of the metric and of the stress-energy tensor. Therefore, a background is necessary where the variables representing the geometry and the energy-matter are independent. This representation is given by a well-posed Cauchy problem for Einstein's field equation. This book establishes a precise mathematical framework in which linearization stability of Einstein's equation with matter makes sense. Using this framework, conditions for this type of stability in Robertson-Walker models of the universe are discussed.
Sir Geoffrey Ingram Taylor (1886-1975) was a physicist, mathematician and expert on fluid dynamics and wave theory. He is widely considered to be one of the greatest physical scientists of the twentieth century. Across these four volumes, published between 1958 and 1971, Batchelor has collected together almost 200 of Sir Geoffrey Ingram Taylor's papers. The papers of the first three volumes are grouped approximately by subject, with Volume IV collating a number of miscellaneous papers on the mechanics of fluids. Together, these volumes allow a thorough exploration of the breadth and diversity of Taylor's interests within the field of fluid dynamics. At the end of Volume IV, Batchelor provides the reader with both a chronological list of the papers presented across all four volumes, and a list of Sir Geoffrey Taylor's other published articles, completing this truly invaluable research and reference work.
This book contains the papers presented at a NATO Advanced Research Institute on "Mediterranean Marine Ecosystems", held at Heraklion-Crete, Greece, from September 23-27, 1983. A workshop rather than a conference, it was sponsored by the Eco-Sciences Special Programme Panel, in cooperation with the Marine Science Panel. The third of its kind, it was scheduled in the framework of a project on a multidisciplinary integrated approach to the study of the Mediterranean. This sea and the surrounding lands were not only the cradle of many civilizations but remain, up to the present time, one of the major world areas of marine traffic, communication and exchanges, fisheries and aquaculture, inshore human activities and pollution. To a certain degree it constitutes a gigantic natural laboratory, where the fate of threatened aquatic and terrestrial ecosystems, including the human one, is tested. The Mediterranean Sea, with its geological history and present-day geographic, hydrological and climatic conditions, is believed to form an ecological entity. Important exchanges and mutual influences take place with the surrounding land area and the water masses, naturally (Atlantic, Black Sea) or artificially (Red Sea), connected to the Mediterranean. Therefore, a better and in-depth knowledge of the various ecosystems, benthic, planktonic and nektonic, neritic or pelagic, in the Western or the Eastern Basin seems to be a prerequisite to any action in preserving, upgrading and managing the natural resources of the area.
* Written by an interdisciplinary group of specialists from the arts, humanities and sciences at Oxford University * Suitable for a wide non-academic readership, and will appeal to anyone with an interest in mathematics, science and philosophy.
POWER GRID RESILIENCE AGAINST NATURAL DISASTERS How to protect our power grids in the face of extreme weather events The field of structural and operational resilience of power systems, particularly against natural disasters, is of obvious importance in light of climate change and the accompanying increase in hurricanes, wildfires, tornados, frigid temperatures, and more. Addressing these vulnerabilities in service is a matter of increasing diligence for the electric power industry, and as such, targeted studies and advanced technologies are being developed to help address these issues generally--whether they be from the threat of cyber-attacks or of natural disasters. Power Grid Resilience against Natural Disasters provides, for the first time, a comprehensive and systematic introduction to resilience-enhancing planning and operation strategies of power grids against extreme events. It addresses, in detail, the three necessary steps to ensure power grid success: the preparedness prior to natural disasters, the response as natural disasters unfold, and the recovery after the event. Crucially, the authors put forward state-of-the-art methods towards improving today's practices in managing these three arenas. Power Grid Resilience against Natural Disasters readers will also find: Data, tables, and illustrations to supplement and clarify the points put forward in each chapter Case studies on realistic power systems and industry standards and practices related to the topics covered Potential to be a supplementary text in advanced level power engineering courses Power Grid Resilience against Natural Disasters will be of interest to specialists and engineers, as well as planners and operators from industry. It can also be a useful resource for senior undergraduate students, postgraduate students, researchers, and research libraries. 
Moreover, it will appeal to all readers with a strong background in power system analysis, operation and control, optimization methods, Markov decision processes, and probability and statistics.
These are the proceedings of the 19th international conference on domain decomposition methods in science and engineering. Domain decomposition methods are iterative methods for solving the often very large linear or nonlinear systems of algebraic equations that arise in various problems in mathematics, computational science, engineering and industry. They are designed for massively parallel computers and take the memory hierarchy of such systems into account. This is essential for approaching peak floating point performance. There is an increasingly well-developed theory which is having a direct impact on the development and improvement of these algorithms.
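The kind of iterative method in question can be illustrated by its classical prototype, the Schwarz alternating method on two overlapping subdomains (a minimal sketch, not taken from these proceedings; the grid size and subdomain split are arbitrary choices):

```python
# Schwarz alternating method for -u'' = 1 on (0,1), u(0) = u(1) = 0:
# repeatedly solve the problem exactly on two overlapping subdomains,
# exchanging boundary values. The exact solution u(x) = x(1-x)/2 is
# quadratic, so the finite-difference solution matches it at the nodes.

N = 99                      # interior grid points, spacing h = 1/(N+1)
h = 1.0 / (N + 1)
m1, m2 = 30, 70             # subdomains: nodes 1..m2 and m1..N (overlap m1..m2)

def solve_block(rhs):
    """Thomas algorithm for -u[i-1] + 2 u[i] - u[i+1] = rhs[i] with zero
    Dirichlet data (subdomain boundary values already folded into rhs)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        piv = 2.0 + cp[i - 1]
        cp[i] = -1.0 / piv
        dp[i] = (rhs[i] + dp[i - 1]) / piv
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

u = [0.0] * (N + 2)         # u[0] = u[N+1] = 0 are the global boundary values
for _ in range(30):
    # subdomain 1: nodes 1..m2, right boundary value taken from u[m2+1]
    rhs = [h * h] * m2
    rhs[-1] += u[m2 + 1]
    u[1:m2 + 1] = solve_block(rhs)
    # subdomain 2: nodes m1..N, left boundary value taken from u[m1-1]
    rhs = [h * h] * (N - m1 + 1)
    rhs[0] += u[m1 - 1]
    u[m1:N + 1] = solve_block(rhs)

err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(N + 2))
print(err)                  # converges geometrically; error reaches roundoff
```

The parallel (additive) variants used on massively parallel machines solve all subdomain problems simultaneously instead of in sequence, at the cost of a slightly slower contraction rate.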
2D infrared (IR) spectroscopy is a cutting-edge technique, with applications in subjects as diverse as the energy sciences, biophysics and physical chemistry. This book introduces the essential concepts of 2D IR spectroscopy step-by-step to build an intuitive and in-depth understanding of the method. This unique book introduces the mathematical formalism in a simple manner, examines the design considerations for implementing the methods in the laboratory, and contains working computer code to simulate 2D IR spectra and exercises to illustrate involved concepts. Readers will learn how to accurately interpret 2D IR spectra, design their own spectrometer and invent their own pulse sequences. It is an excellent starting point for graduate students and researchers new to this exciting field. Computer codes and answers to the exercises can be downloaded from the authors' website, available at www.cambridge.org/9781107000056.
* Written by an interdisciplinary group of specialists from the arts, humanities and sciences at Oxford University * Suitable for a wide non-academic readership, and will appeal to anyone with an interest in mathematics, science and philosophy.
Special functions are essential for solving problems in virtually all engineering disciplines. Assuming only knowledge of elementary calculus and differential equations, this concise, clearly written reference illustrates the properties and applications of the special functions most frequently needed by practising engineers. Copious illustrations of worked out sample problems from a wide range of real-world engineering applications distinguish this work from others.
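As a concrete instance of the kind of material such a reference covers, a frequently needed special function like the Bessel function J0 can be evaluated directly from its power series (a generic illustration, not drawn from this book):

```python
# Bessel function of the first kind, order zero, via its power series
#   J0(x) = sum_{m>=0} (-1)^m / (m!)^2 * (x/2)^(2m),
# which converges rapidly for moderate x.
from math import factorial

def bessel_j0(x, terms=30):
    return sum((-1) ** m / factorial(m) ** 2 * (x / 2.0) ** (2 * m)
               for m in range(terms))

print(bessel_j0(1.0))   # ~0.7651976866
```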
This book contains a collection of papers giving insight into the fundamentals and applications of nanoscale devices. The papers have been presented at the NATO Advanced Research Workshop on Nanoscale Devices - Fundamentals and Applications (NDFA-2004, ARW 980607) held in Kishinev (Chisinau), Moldova, on September 18-22, 2004. The main focus of the contributions is on the synthesis and characterization of nanoscale magnetic materials, the fundamental physics and materials aspects of solid-state nanostructures, the development of novel device concepts and design principles for nanoscale devices, as well as on applications in electronics with special emphasis on defence against the threat of terrorism.
This book addresses the current status, challenges and future directions of data-driven materials discovery and design. It presents the analysis of and learning from data as a key theme in many science and cyber related applications. The challenging open questions as well as future directions in the application of data science to materials problems are sketched. Computational and experimental facilities today generate vast amounts of data at an unprecedented rate. The book gives guidance on discovering new knowledge that enables materials innovation to address grand challenges in energy, environment and security, by forging the clearer link needed between the data from these facilities and the theory and underlying science. The role of inference and optimization methods in distilling the data and constraining predictions using insights and results from theory is key to achieving the desired goals of real time analysis and feedback. Thus, the importance of this book lies in emphasizing that the full value of knowledge-driven discovery using data can only be realized by integrating statistical and information sciences with materials science, which is increasingly dependent on high-throughput and large-scale computational and experimental data gathering efforts. This is especially the case as we enter a new era of big data in materials science with the planning of future experimental facilities such as the Linac Coherent Light Source at Stanford (LCLS-II), the European X-ray Free Electron Laser (EXFEL) and MaRIE (Matter Radiation in Extremes), the signature concept facility from Los Alamos National Laboratory. These facilities are expected to generate hundreds of terabytes to several petabytes of in situ spatially and temporally resolved data per sample.
The questions that then arise include how we can learn from the data to accelerate the processing and analysis of reconstructed microstructure, rapidly map spatially resolved properties from high throughput data, devise diagnostics for pattern detection, and guide experiments towards desired targeted properties. The authors are an interdisciplinary group of leading experts who bring the excitement of the nascent and rapidly emerging field of materials informatics to the reader.
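The inference step described above can be caricatured in a few lines: fit a model to known (descriptor, property) pairs, then screen new candidates with it. This is a toy sketch on entirely hypothetical data, not a method from this book:

```python
# Toy materials-informatics loop: least-squares fit of
#   property ~ w0 + w1 * descriptor
# on hypothetical training data, then prediction for a new candidate.

# (descriptor, measured_property) pairs for known "materials" (made up)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

w1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
w0 = (sy - w1 * sx) / n                          # intercept

def predict(descriptor):
    """Screen a new candidate with the fitted surrogate model."""
    return w0 + w1 * descriptor

print(w0, w1, predict(5.0))
```

Real materials-informatics pipelines replace this single descriptor with high-dimensional feature sets and the linear fit with regularized or Bayesian models, but the fit-then-screen structure is the same.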
This textbook introduces step by step the basic numerical methods to solve the equations governing the motion of the atmosphere and ocean, and describes how to develop a set of corresponding instructions for the computer as part of a code. Today's computers are powerful enough to allow 7-day forecasts within hours, and modern teaching of the subject requires a combination of theoretical and computational approaches. The presentation is aimed at beginning graduate students intending to become forecasters or researchers, that is, users of existing models or model developers. However, model developers must be well versed in the underlying physics as well as in numerical methods. Thus, while some of the topics discussed in the modeling of the atmosphere and ocean are more advanced, the book ensures that the gap between those scientists who analyze results from model simulations and observations and those who work with the inner workings of the model does not widen further. In this spirit, the course presents methods whereby important balance equations in oceanography and meteorology, namely the advection-diffusion equation and the shallow water equations on a rotating Earth, can be solved by numerical means with little prior knowledge. The numerical focus is on the finite-difference (FD) methods, and although more powerful methods exist, the simplicity of FD makes it ideal as a pedagogical introduction to the subject. The book also includes suitable exercises and computer problems.
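The finite-difference approach can be previewed on the simpler of the two balance equations mentioned, the 1-D advection-diffusion equation (an illustrative sketch with arbitrary parameters, not the book's own code):

```python
# Explicit FD scheme for  u_t + c u_x = k u_xx  on a periodic domain:
# first-order upwind advection plus centred diffusion.

import math

N, c, k = 100, 1.0, 0.01
dx = 1.0 / N
dt = 0.4 * min(dx / c, dx * dx / (2 * k))   # respect both stability limits

# initial condition: a smooth bump centred at x = 0.5
u = [math.exp(-100 * (i * dx - 0.5) ** 2) for i in range(N)]

def step(u):
    """One explicit time step; the upwind difference assumes c > 0."""
    return [u[i]
            - c * dt / dx * (u[i] - u[i - 1])                            # upwind advection
            + k * dt / dx ** 2 * (u[(i + 1) % N] - 2 * u[i] + u[i - 1])  # diffusion
            for i in range(N)]

mass0 = sum(u) * dx
for _ in range(200):
    u = step(u)
print(abs(sum(u) * dx - mass0))   # mass is conserved on the periodic grid
```

Because the fluxes telescope around the periodic grid, total mass is conserved to roundoff, and with these time-step limits the coefficients of the update are non-negative, so the scheme obeys a discrete maximum principle.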