
2Math

Friday, April 25, 2008

Mapping the Math in Music

The figure shows how geometrical music theory represents four-note chord types: the collections of notes form a tetrahedron, with the colors indicating the spacing between the individual notes in a sequence. In the blue spheres the notes are clustered; in the warmer colors they are farther apart. The red ball at the top of the pyramid is the diminished seventh, a chord popular in 19th-century music. Near it are all the most familiar chords of Western music. [Image: Dmitri Tymoczko]

More than 2,000 years ago, Pythagoras reportedly discovered that pleasing musical intervals could be described using simple ratios. And the so-called musica universalis or "music of the spheres" emerged in the Middle Ages as the philosophical idea that the proportions in the movements of the celestial bodies -- the sun, moon and planets -- could be viewed as a form of music, inaudible but perfectly harmonious.

Now, three music professors -- Clifton Callender at Florida State University, Ian Quinn at Yale University and Dmitri Tymoczko at Princeton University -- have devised a new way of analyzing and categorizing music that takes advantage of the deep, complex mathematics they see enmeshed in its very fabric.

In a recent article in Science, the trio outlines a method called "geometrical music theory" that translates the language of music theory into that of contemporary geometry. They take sequences of notes, like chords, rhythms and scales, and categorize them so they can be grouped into "families." They have found a way to assign mathematical structure to these families, so the families can then be represented by points in complex geometrical spaces, much the way "x" and "y" coordinates, in the simpler system of high school algebra, correspond to points on a two-dimensional plane. Different types of categorization produce different geometrical spaces, reflecting the different ways in which musicians over the centuries have understood music.
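The flavor of this idea can be sketched in a few lines of code. The following is a toy illustration (my own, not the authors' actual construction): chords related by octave shifts and transposition collapse to a single representative point, so one coordinate tuple stands for an entire "family."

```python
# Toy sketch of grouping chords into "families" (not the paper's construction):
# reduce a chord modulo octave equivalence and transposition so that related
# chords map to the same point.

def pitch_classes(chord):
    """Reduce MIDI-style note numbers modulo the octave (12 semitones)."""
    return sorted(p % 12 for p in chord)

def transposition_normal_form(chord):
    """Shift a chord so its lowest pitch class is 0; chords related by
    transposition then share one representative point."""
    pcs = pitch_classes(chord)
    root = pcs[0]
    return tuple((p - root) % 12 for p in pcs)

# C major (C, E, G) and D major (D, F#, A) collapse to the same point:
c_major = [60, 64, 67]
d_major = [62, 66, 69]
print(transposition_normal_form(c_major))  # (0, 4, 7)
print(transposition_normal_form(d_major))  # (0, 4, 7)
```

The full theory quotients by more symmetries (permutation, inversion, and so on), each choice of symmetries yielding a different geometric space.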

This achievement, they expect, will allow researchers to analyze and understand music in much deeper and more satisfying ways. The method, according to its authors, allows them to analyze and compare many kinds of Western (and perhaps some non-Western) music. (The method focuses on Western-style music because concepts like "chord" are not universal across all styles.) It also incorporates many past schemes by music theorists to render music into mathematical form. "The music of the spheres isn't really a metaphor -- some musical spaces really are spheres," said Tymoczko, an assistant professor of music at Princeton. "The whole point of making these geometric spaces is that, at the end of the day, it helps you understand music better. Having a powerful set of tools for conceptualizing music allows you to do all sorts of things you hadn't done before."

The work represents a significant departure from other attempts to quantify music, according to Rachel Wells Hall of the Department of Mathematics and Computer Science at St. Joseph's University in Philadelphia. In an accompanying essay, she writes that their effort "stands out both for the breadth of its musical implications and the depth of its mathematical content."

Reference
Clifton Callender, Ian Quinn and Dmitri Tymoczko, "Generalized Voice-Leading Spaces," Science, Vol. 320, No. 5874, pp. 346-348 (18 April 2008).
Abstract Link

[This posting is based on a press release by Princeton University]

Thursday, April 17, 2008

Supercomputer Simulates Merger of Three Black Holes

Simulated paths of three black holes merging. (Image courtesy: Rochester Institute of Technology)

The same team of astrophysicists that cracked the computer code simulating two black holes crashing and merging together has now, for the first time, caused a three-black-hole collision.

Manuela Campanelli, Carlos Lousto and Yosef Zlochower—scientists in Rochester Institute of Technology’s Center for Computational Relativity and Gravitation—simulated triplet black holes to test their breakthrough method that, in 2005, merged two of these large mass objects on a supercomputer following Einstein’s theory of general relativity.

The new simulation of multiple black holes evolving, orbiting and eventually colliding confirmed the robustness of the team’s computer code. The May issue of Physical Review D will publish the team’s latest findings in the article “Close Encounters of Three Black Holes,” revealing the distinct gravitational signature three black holes might produce. The article will appear in the journal’s “Rapid Communications” section.

“We discovered rich dynamics leading to very elliptical orbits, complicated orbital dynamics, simultaneous triple mergers and complex gravitational waveforms that might be observed by gravitational wave detectors such as LIGO and LISA,” says Lousto, professor in RIT’s School of Mathematical Sciences. “These simulations are timely because a triple quasar was recently discovered by a team led by Caltech astronomer George Djorgovski. This presumably represents the first observed supermassive black hole triplet.”

The RIT team’s triple merger simulates the simplest case of equal masses and nonspinning black holes, a prerequisite for exploring configurations of unequal masses and different spins and rotations. The center’s supercomputer cluster “newHorizons” processed the simulations and performed evolutions of up to 22 black holes to verify the results.
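The real simulations solve Einstein’s field equations numerically on a supercomputer; a far simpler Newtonian point-mass toy (entirely my own sketch, not the RIT code) still hints at why three mutually attracting bodies produce such complicated orbital dynamics.

```python
# Toy Newtonian three-body integrator (leapfrog scheme). This is only an
# illustration of chaotic triple dynamics; the RIT simulations solve the
# full equations of general relativity, which this sketch does not attempt.
import numpy as np

G = 1.0      # Newton's constant in code units
EPS = 0.05   # softening length, avoids numerical blow-up at close encounters

def accelerations(pos, masses):
    """Pairwise (softened) Newtonian gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                d3 = (r @ r + EPS**2) ** 1.5
                acc[i] += G * masses[j] * r / d3
    return acc

def evolve(pos, vel, masses, dt=1e-3, steps=5000):
    """Leapfrog (kick-drift-kick) time integration."""
    for _ in range(steps):
        vel = vel + 0.5 * dt * accelerations(pos, masses)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * accelerations(pos, masses)
    return pos, vel

# Three equal, nonspinning point masses at the corners of a triangle,
# released from rest -- the simplest case, as in the RIT study:
masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[1.0, 0.0], [-0.5, 0.9], [-0.5, -0.9]])
vel0 = np.zeros_like(pos0)
pos, vel = evolve(pos0, vel0, masses)
```

Even this crude model shows sensitive dependence on the initial positions; in the relativistic case the bodies also radiate energy as gravitational waves, which is what drives them to merge.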

“Twenty-two is not going to happen in reality, but three or four can happen,” says Yosef Zlochower, an assistant professor in the School of Mathematical Sciences. “We realized that the code itself really didn’t care how many black holes there were. As long as we could specify where they were located—and had enough computer power—we could track them.”

Specially designed high-performance computers like newHorizons are essential tools for scientists like Campanelli’s team, who specialize in computational astrophysics and numerical relativity, a research field dedicated to testing Einstein’s theory of general relativity. Only supercomputers can simulate the violent collisions that generate gravitational waves—warps in space-time that might provide clues to the origin of the universe.

Scientists expect to measure actual gravitational waves for the first time within the next decade using the ground-based detector known as the Laser Interferometer Gravitational Wave Observatory (LIGO) and the future NASA/European Space Agency space mission, the Laser Interferometer Space Antenna (LISA).

“In order to confirm the detection of gravitational waves, scientists need the modeling of gravitational waves coming from space,” says Campanelli, director of RIT’s Center for Computational Relativity and Gravitation. “They need to know what to look for in the data they acquire; otherwise it will look like just noise. If you know what to look for, you can confirm the existence of gravitational waves. That’s why they need all these theoretical predictions.”

Adds Lousto: “Gravitational waves can also confirm the existence of black holes directly, because they have a special signature. That’s what we’re simulating. We are predicting a very specific signature of black hole encounters. And so, if we check that, there is very strong evidence of the existence of black holes.”

For more information about RIT’s Center for Computational Relativity and Gravitation, visit CCRG. To see a visualization by CCRG member Hans-Peter Bischof tracing the interaction of a trio of same-sized masses, click on CCRG Movies.

Facts about newHorizons:

The “newHorizons” computer cluster at RIT’s Center for Computational Relativity and Gravitation boasts 85 nodes—each with dual processors, for four computing cores per node—along with high-speed InfiniBand interconnections and 36 terabytes of storage space.

The standard Mac desktop computer has 2 gigabytes of memory. By comparison, each node in newHorizons has 16 gigabytes, for a total of about 1.4 terabytes of memory. In addition, InfiniBand technology makes the computer especially fast, moving packets of information with a lag time, or latency, of 1.29 microseconds. The high-performance computer, built with hardware from California-based Western Scientific, will operate at its maximum capacity 24 hours a day for four to five years.
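The quoted memory total is easy to check from the per-node figures:

```python
# Sanity check of the newHorizons memory figures quoted above.
nodes = 85
gb_per_node = 16
total_gb = nodes * gb_per_node      # 1360 GB
total_tb = total_gb / 1000          # 1.36 TB, i.e. roughly the quoted 1.4 TB
print(total_gb, total_tb)
```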

This posting is based on materials provided by Rochester Institute of Technology.

Wednesday, April 09, 2008

SerialXpress® Creates Waveforms for Data Receiver Testing

Beaverton, Oregon-based Tektronix is a leading supplier of test, measurement and monitoring products, solutions and services for the communications, computer and semiconductor industries, as well as for military/aerospace, consumer electronics, education and a broad range of other industries. The company has operations in 19 countries worldwide.

Tektronix has developed SerialXpress, an advanced software package that performs direct synthesis of waveforms for high-speed serial data receiver testing. It is suited to testing the SATA, SAS, PCI Express, HDMI and DisplayPort serial data standards, as well as any other serial bus technology operating at speeds up to 6 Gbps. SerialXpress creates these waveforms for high-speed transmission on the AWG7000 series of arbitrary waveform generators.

The easy-to-use interface provided by the SerialXpress software package makes the creation and management of high-speed serial waveforms far more intuitive, eliminating the need for multiple instruments and complicated test configurations.

Next generation high-speed serial standards ranging from 3 to 6 Gb/s have diminishing timing margins that require receiver characterization to complement conventional transmitter testing. Increases in transmission speeds and trace length of transmission lines heighten the potential for errors in the signal path (channel). Engineers need to simulate these effects during the design phase to test the tolerance of their receiver designs.

SerialXpress provides these capabilities by working in tandem with the AWG7000 arbitrary waveform generators through direct synthesis. Direct synthesis is a flexible and repeatable method for creating ideal or impaired waveforms incorporating periodic and random jitter, pre/de-emphasis, channel emulation, idle states and SSC (spread-spectrum clocking) parameters. This often eliminates the need for multiple test instruments—such as sine wave generators, noise generators, power dividers and power combiners—that not only complicate the entire test setup but also require extensive calibration.
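The idea of computing an impaired waveform directly, rather than mixing it from separate instruments, can be illustrated generically (this is my own sketch, not Tektronix's implementation): build an NRZ bit stream sample by sample and inject periodic plus random jitter into the edge timing.

```python
# Generic illustration of "direct synthesis" of an impaired serial waveform
# (not Tektronix's algorithm): an NRZ pattern whose edge positions are shifted
# by sinusoidal (periodic) jitter plus Gaussian (random) jitter.
import numpy as np

rng = np.random.default_rng(0)

def nrz_waveform(bits, samples_per_bit=100,
                 pj_ampl=0.05, pj_freq=0.01, rj_sigma=0.02):
    """Return samples of an NRZ waveform. Jitter amplitudes are expressed
    in unit intervals (UI); levels are +1/-1."""
    n = len(bits)
    t = np.arange(n * samples_per_bit) / samples_per_bit  # time in UI
    # jitter perturbs the instant at which each bit boundary is seen
    jitter = (pj_ampl * np.sin(2 * np.pi * pj_freq * t)
              + rj_sigma * rng.standard_normal(t.size))
    idx = np.clip((t + jitter).astype(int), 0, n - 1)
    return np.where(np.array(bits)[idx] > 0, 1.0, -1.0)

wave = nrz_waveform([1, 0, 1, 1, 0, 0, 1, 0])
```

A real receiver-tolerance setup would add pre/de-emphasis, channel filtering and SSC on top, but the principle is the same: every impairment is computed into one waveform, then played back by the generator.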

Engineers can simply recall a setup file that encapsulates all of the relevant standard-mandated tests and signal impairments, with no additional external components or mixing required, reducing complexity and cost.

For more details, visit www.tek.com/serial_data

Wednesday, April 02, 2008

Imagine.Lab OPTIMUS

Leuven, Belgium-based LMS is an engineering innovation partner for companies in the automotive, aerospace and other advanced manufacturing industries. LMS enables its customers to get better products to market faster, and to turn superior process efficiency into a strategic competitive advantage. LMS delivers a unique combination of virtual simulation software, testing systems and engineering services. The company is focused on the mission-critical performance attributes in key manufacturing industries, including structural integrity, handling, safety, reliability, comfort and sound quality.

Recently, LMS International launched Imagine.Lab OPTIMUS. With the integration of OPTIMUS, Imagine.Lab AMESim gains new capabilities to capture and automate 1D simulation processes, allowing quick assessment of multiple design options. The new optimisation module also enables design and engineering teams to automatically select the optimal design, taking into account multiple performance targets and Six Sigma criteria.

LMS Imagine.Lab AMESim offers a complete 1D simulation platform to model and analyse multi-domain, intelligent systems and predict their multi-disciplinary performance. OPTIMUS was developed in collaboration with Noesis Solutions, a subsidiary of LMS, specialising in developing solutions for process integration and design optimisation.

Through the interactive interface, users can capture the different steps and parameter settings in their simulation process. Once captured, non-expert users can apply the complete process without having to worry about what to do and when to do it. A quick rerun of this process lets users explore multiple design alternatives, which translates into higher productivity. LMS Imagine.Lab OPTIMUS automatically explores a multitude of design alternatives using design-of-experiments and response-surface-modeling techniques.
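The general workflow behind those two techniques can be shown with a minimal example (a generic illustration, not the OPTIMUS algorithm): evaluate an expensive "simulation" at a handful of design points, fit a cheap surrogate surface, and read the optimum off the surrogate.

```python
# Generic design-of-experiments + response-surface sketch (not OPTIMUS itself):
# sample a few design points, fit a quadratic surrogate, find its optimum.
import numpy as np

def simulation(x):
    """Stand-in for an expensive 1D simulation run; true minimum at x = 2."""
    return (x - 2.0) ** 2 + 1.0

# Design of experiments: a small set of sample points across the design range.
design_points = np.linspace(0.0, 4.0, 5)
responses = np.array([simulation(x) for x in design_points])

# Response surface: least-squares quadratic fit y = a*x^2 + b*x + c.
a, b, c = np.polyfit(design_points, responses, 2)

# Optimum of the surrogate (vertex of the fitted parabola):
x_opt = -b / (2 * a)
print(round(x_opt, 3))  # 2.0 -- the surrogate recovers the true minimum
```

In practice the design space has many dimensions and the surrogate is evaluated thousands of times in place of the real solver, which is where the productivity gain comes from.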

For more information, visit http://www.lmsintl.com/.