Bulletin 13

Bulletin number 13
29 May 2006

INDEX:

Painting with Numbers
Interview with Marta Sanz-Solé, President of the ICM2006 Local Programme Committee
Plenary Session: Iain Johnstone
Plenary Session: Mathematical Foundations of the Computational Sciences
Satellite Conferences: Gran Canaria
Applications: Encrypted Messages

Painting with Numbers

The ICM2006 Fractal Art Competition will be decided in June

Equations are a way of describing reality, but some equations can be used the other way round; that is, to generate artificial worlds. Equations describing fractal sets, for example, can give rise to landscapes as fascinating as those depicted in famous canvases. Does this sound impossible? Before jumping to conclusions, it would be better to take a look at the works on display at the Exhibition of Fractal Art, which forms part of the ICM2006 International Congress of Mathematicians, to be held from August 22nd to 30th at the ICM2006 venue and at the Centro Cultural Conde Duque in Madrid. Many of the works exhibited will come from the Benoit Mandelbrot ICM2006 International Competition of Fractal Art, the prizes for which will be announced in June. 

The ICM2006 Exhibition of Fractal Art will be held thanks to support from the Fundación Española de Ciencia y Tecnología (Fecyt – Spanish Foundation for Science and Technology). More than three hundred entries have been received from all over the world for the Fractal Art Competition. The jury will be chaired by Benoit Mandelbrot himself, widely recognized as the “father” of fractal geometry. 

What are fractals? It is not necessary to enter into a complicated mathematical description to get an intuitive grasp of what they are: structures which, “when a small portion is observed, preserve a similar, although not necessarily identical appearance to what they look like when observed in their entirety”, explains Javier Barrallo, one of the organizers of the fractal art competition, and a fractal artist himself. Some examples of fractals are: a tree and its branches; a cauliflower, apparently made up of endless cauliflowers joined together; the coastline of a country… 

The example of a coastline serves to explain another property of fractals: the fact that no matter how small the scale at which they are observed – however close you “zoom in” – they always keep the same appearance, and so on to infinity. Obviously, a coastline is not infinite – “authentic” fractals are a mathematical idealization – but the effect of the fractal phenomenon can be seen to be real in the “coastline paradox”. When a coastline, or for that matter any rough surface, is measured, the result will vary according to the accuracy desired: if one takes into account the shape of bays, of rocks, of grains of sand, etc., the coastline will get theoretically longer and longer, and in an ideal fractal it would be infinite.
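
The effect can be seen numerically with the Koch curve, a classic idealized fractal often used as a model coastline: each time the measuring ruler is divided by three, the number of segments needed is multiplied by four, so the measured length grows without limit. A minimal sketch in Python (the number of refinement steps is arbitrary, chosen only for illustration):

```python
# Measured length of an idealized fractal "coastline" (the Koch curve)
# as the measuring ruler gets shorter. Each refinement divides the ruler
# by 3 and multiplies the number of segments by 4, so the total length
# grows without bound: length = (4/3)**n after n refinements.

for n in range(7):
    ruler = (1 / 3) ** n          # length of the measuring stick
    segments = 4 ** n             # number of sticks needed
    length = ruler * segments     # measured coastline length
    print(f"ruler = {ruler:.5f}  measured length = {length:.3f}")
```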

 
Is it really art?
Beyond the strictly aesthetic qualities, in the opinion of some people these properties give added value to works of “fractal art”. However, these works have not always been considered art. Are they not merely a computer-generated graphic representation of a formula? Yes and no, reply the authors of fractal art. What follows is a brief explanation of how a fractal is painted.

The point of departure is indeed a mathematical formula. The first fractal formulae were described more than a century ago. Today there are hundreds. And yes, as Barrallo explains, the computer is vital: “A small image, one of 640 x 480 pixels, for example, contains 307,200 dots that must be calculated. It may be necessary to calculate each one of these dots about 1,000 times by the formula determining the fractal. This means that the formula must be calculated more than 300 million times. And this is just for a small-size image!”. 

So, armed with both formula and computer, we must now proceed to iteration. This involves “calculating a formula over and over again, starting from its initial value”, says Barrallo. “After calculating the formula for the first time, we take the resulting value and introduce it into the formula. The new result is calculated again, and so on successively”. In the case of fractals, the initial value has to do with the position of the dot in the frame (the pixel on the screen).

Then colours are assigned according to the value of each dot. The fact that the behaviour of two dots situated very close together can be radically different – one diverging toward infinity and the other converging toward a given value – is “what makes fractal exploration so fascinating”, says Barrallo. And what leads to the explosion of shapes and colours in the image. 
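
The procedure Barrallo describes corresponds to the classic escape-time algorithm. The following is a minimal sketch, using the familiar Mandelbrot formula z → z² + c as the fractal formula; the image size, the region of the plane and the character “palette” are chosen arbitrarily for illustration:

```python
# Escape-time rendering of the Mandelbrot set: every pixel is mapped to a
# point c in the complex plane, the formula z -> z**2 + c is iterated from
# z = 0, and the pixel is coloured by how many iterations it takes |z| to
# diverge (here, to exceed 2). Points that never escape belong to the set.

WIDTH, HEIGHT, MAX_ITER = 80, 40, 100

def escape_time(c: complex) -> int:
    z = 0
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return n          # diverged: colour by iteration count
    return MAX_ITER           # treated as "inside" the set

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # map the pixel to the region [-2, 1] x [-1.2, 1.2] of the plane
        c = complex(-2 + 3 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        line += " .:-=+*#%@"[min(escape_time(c) // 10, 9)]
    print(line)
```

Here each character plays the role of a colour; a real fractal program maps the iteration count to a smooth palette instead.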

But this is by no means due exclusively to the computer. “An image of 800 x 600 dots contains 480,000 pixels, or dots on the screen, which can be combined in an image in 10^3,467,865 different ways; that is, a 1 followed by more than three million zeros. A computer does not possess the capacity to select images from among such an immense collection and determine which are beautiful and which are not”. It is the hand, or in this case the brain, of the artist that is the vital factor. Furthermore, as in all art, and in mathematics itself, fractal art is in a constant state of evolution. The algorithms currently employed have little to do with those employed twenty years ago. 

For further information:
Interview with Benoit Mandelbrot in InfoICM2006-05-24
/prensa/boletines/boletin19/#mandelbrot

Fractal Art Competition web site:
http://www.fractalartcontests.com/2006/

Information about fractals:
http://www.divulgamat.net (in “Virtual Exhibitions” and “Art and Mathematics”)
www.fractalus.com


Interview with Marta Sanz-Solé, President  of the ICM2006 Local Programme Committee

“This ICM places the accent on the fruitful interaction among the different areas of mathematics”

Marta Sanz-Solé teaches at the University of Barcelona, the same university from which she graduated (1974) and gained her doctorate (1978), and where she has been Dean of the Faculty of Mathematics and vice-president of the Division of Sciences. She has spent time doing research in the USA, Italy, France and Switzerland, and her research work has been centered on Malliavin calculus and stochastic analysis. She is the author of some 80 publications, serves as a member on various committees and has participated in the organization of numerous congresses and events. In recent months she has devoted much time and effort to the ICM2006 International Congress of Mathematicians as a member of the Organizing Committee, in particular as president of the Local Programme Committee in charge of organizing the scientific content of the event.

How has the scientific programme for the ICM2006 been drawn up?
There is a committee responsible for deciding on the number of scientific sections in the congress and their content. On this occasion, the ICM2006 has been divided into 20 different sections. This Programme Committee is appointed by the Executive Committee of the International Mathematical Union, and it is also responsible for putting forward the names of invited speakers, both for the plenary lectures and for the talks given in each section.

What is the task of the committee you are chairing?
The members of the LPC (Local Programme Committee) work in co-ordination with the Programme Committee, and our main job is the organization of the congress programme. It’s a question of arranging the agenda in a coherent manner. You have to take into account the fact that invited talks in each section must necessarily overlap, so they have to be scheduled in such a way that related fields do not coincide with each other, as far as this is possible, because all the talks are of interest to the majority of the audience. We are also responsible for the schedule of the plenary programme, since this will highlight trends and set the pace of the congress.

Don’t you find it frustrating that others decide on the content?
Those are the rules of the game. However, we have a certain scope for movement. The Organizing Committee of the congress has the prerogative of proposing one plenary lecturer and three section speakers, and it has delegated this choice to the LPC. Furthermore, we are responsible for many other activities in the programme, such as Special Activities and Other Activities. Personally, I’ve been deeply involved in some of these scientific activities. For example, I’m the organizer of the Closing Round Table, an activity which has been included in the ICM for the very first time. All the panellists are prestigious mathematicians, one of whom is Lennart Carleson, the winner of this year’s Abel Prize. The title of the round table - Are Pure and Applied Mathematics Drifting Apart? – is a reflection of the interest in the debate about this delicate dovetailing between two aspects of the profession, as well as the need to work closely together to make important advances in an eminently technological society. In fact, the programme for this ICM places the accent on the fruitful interaction among the different fields of mathematics, which until a short time ago were following divergent paths.

One of these ‘special activities’ is devoted to the popularization of mathematics: Are you worried about the public image of mathematics?
Of course; most people don’t even realize that mathematics is useful, or even that it is present in our daily activity. Its intellectual value and appeal remain largely unknown. The image people have of it is of something boring; it’s a subject surrounded by a lack of understanding and a lack of communication. However, we intend to deal with this question at a round table proposed by the European Mathematical Society, in which I am also involved as joint organizer, and which has even broader objectives. The debate will revolve around how to make mathematics more accessible to scientists working in other disciplines; how to communicate the value of our research to politicians, who are the ones who decide on funding for research; and how to convey the real values of mathematics to young people who are on the point of choosing their future university education and PhD courses.

In addition to the invited speakers, there are also other contributions: Are you and your colleagues involved in the process of selection for these?
Certainly; I was about to explain this before. This is a very important part of the congress, and we on the committee have been responsible for organizing all of it. We were responsible for the “call for abstracts” in three different categories: oral communications, posters, and contributions on mathematical software. Then we evaluated the abstracts submitted for these contributions and programmed their presentation in the appropriate sessions.

How many were submitted and how many have been selected?
Well, the question is how many will be presented during the congress, because the figures may vary. Approximately 1,600 were submitted. After the evaluation process, and taking into account the withdrawals, we have at the moment about 1,400. However, we know from past experience that some of those who have submitted contributions will not actually attend the congress. I would hazard a guess that the final count will be about 800 oral communications, 300 posters, and 25 mathematical software presentations, which will be a great success in terms of participation.

Does this represent any change in comparison with previous congresses?

I think the final figures will be similar to those of the ICM 98 in Berlin. However, we’ve introduced some changes, such as increasing the time for oral communications from 15 to 20 minutes. We’ve also made a special effort to promote the presentation of posters. There’s not much tradition of posters among mathematicians, even though they are more informal, can be much richer, and facilitate greater interaction with people interested in the subject. One of our initiatives has been to organize a competition with prizes for the best posters in terms of presentation, visual quality and content. There are two prizes in each section, although they can also be declared void.

How many people are on the LPC?
There are nine members in all, covering a broad range of mathematical fields, although each one is working in collaboration with between ten and fifteen other people on the evaluation process and other tasks, because there’s a great deal of work to do.

Marta Sanz-Solé’s personal web page
http://www.mat.ub.es/~sanz

President of the IMU Programme Committee
Noga Alon, Tel Aviv University
nogaa@tau.ac.il
http://www.math.tau.ac.il/~nogaa

Local Programme Committee (LPC):
/organization/localprogramcommittee/



Plenary Session: Iain Johnstone

How to Convert Data into Information

For some time now the following wording can be found in many job advertisements: “Minimum requirements: Experience in the implementation of Information Management Systems (Business Intelligence, Data Warehousing, Data Mining)”. Given the growing amount of data handled in many sectors, experience in information management is becoming increasingly necessary. The mathematician Iain Johnstone will give a plenary lecture at the ICM2006 on “High Dimensional Statistical Inference and Random Matrices”, which will deal with the management of massive amounts of data.
Until recently, statistics was centered on the study of one- and multi-dimensional random variables. However, the development of computation has led to the era of “data mining”, and all organizations – banks, hospitals, research centres – handle enormous quantities of data which must often be constantly available, such as financial assets. Statistics in high dimensions is essential for analysing these data, and this will be the central theme of Johnstone’s lecture. This branch of mathematics shows how to organize and summarize data, whether they involve an electrocardiogram, Internet traffic or stocks and shares, in such a way that they provide useful information. 
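
One way to see why random matrices enter the picture (an illustrative sketch, not taken from Johnstone’s lecture; the sample sizes are arbitrary): when the number of variables is comparable to the number of observations, the eigenvalues of the sample covariance matrix of pure noise spread out over a whole interval instead of concentrating near 1, so apparent structure can be an artefact of high dimension.

```python
# Eigenvalues of the sample covariance matrix of pure noise in high
# dimension: with p variables and n observations of comparable size,
# the eigenvalues spread over roughly [(1-sqrt(p/n))**2, (1+sqrt(p/n))**2]
# (the Marchenko-Pastur range) even though the true covariance is the
# identity, whose eigenvalues are all exactly 1.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 500                      # observations, variables
X = rng.standard_normal((n, p))       # pure noise, true covariance = identity
S = X.T @ X / n                       # sample covariance matrix (p x p)
eig = np.linalg.eigvalsh(S)

ratio = p / n
print("smallest eigenvalue:", eig.min(), "~", (1 - ratio**0.5) ** 2)
print("largest eigenvalue: ", eig.max(), "~", (1 + ratio**0.5) ** 2)
```
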
Iain Johnstone was born in Melbourne in 1956. In 1977 he graduated in mathematics at the Australian National University, specializing in pure mathematics and statistics. He obtained his doctorate in statistics from Cornell University in 1981. Since then he has been associated with Stanford University in California, where in 1992 he became professor of statistics and biostatistics. He has received wide recognition for his work in statistics, in addition to his work in biostatistics.

Lecturer: Iain Johnstone
“High Dimensional Statistical Inference and Random Matrices”
Date: Friday, August 25th, 10:15-11:15

ICM2006 Scientific Programme
/scientificprogram/plenarylectures/

More about Iain Johnstone:
http://www-stat.stanford.edu/people/faculty/johnstone/
http://hcr3.isiknowledge.com/author.cgi?&link1=Browse&link2=Results&id=1345


Plenary Session:

Mathematical Foundations of the Computational Sciences

One of the main mathematical problems in theoretical computer science is that known as P vs NP. A simple example suffices to understand the nature of this problem:

Suppose that we wish to select a group of one hundred people from a total of four hundred candidates. Selection must be carried out according to certain determining criteria (for example, in accordance with a list of incompatible pairs: Tom and Dick cannot be together, neither can Harry and So-and-So, nor Tom and Harry, etc.).

Bear in mind that the total number of ways of selecting one hundred elements out of four hundred easily exceeds the number of atoms making up the known universe. Not even an exhaustive search by means of a supercomputer would be capable of covering every possible combination.

This is an example of what is known as an NP problem, whose main characteristic is that it is (relatively) easy to check whether a particular selection satisfies the given criteria. However, the task of generating a solution directly is in general quite difficult. P problems, on the other hand, are those for which relatively easy direct methods exist for finding solutions.
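
A small sketch in Python makes the asymmetry concrete (the incompatible pairs and the proposed committee below are purely illustrative): verifying one proposed committee against the list of incompatible pairs is immediate, while the number of possible committees, C(400, 100), is astronomical.

```python
# Verifying a candidate solution is fast; searching through all of them is not.
import math

# Illustrative list of incompatible pairs among 400 candidates labelled 0..399.
incompatible = {(0, 1), (2, 3), (0, 7)}

def is_valid(committee):
    """Check a proposed committee against the incompatibility list."""
    chosen = set(committee)
    return not any(a in chosen and b in chosen for a, b in incompatible)

proposal = list(range(100, 200))              # some proposed committee of 100 people
print("valid proposal?", is_valid(proposal))  # checked almost instantly -> True

# Exhaustive search, by contrast, is hopeless:
print("possible committees:", math.comb(400, 100))   # about 2 x 10**96
```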

The P vs NP problem consists in exhibiting a problem for which a possible solution can be easily checked, but for which finding solutions by direct methods requires an excessively long time, or else in demonstrating that no such problem exists. At present, the majority feeling in the scientific community is that such problems do indeed exist. Paradoxically, increasingly efficient algorithms are being found for problems traditionally considered difficult to solve. Manindra Agrawal has this year been awarded the Gödel Prize from the European Association for Theoretical Computer Science for demonstrating that the problem of determining whether a number is prime belongs to class P.

In fact, for many problems of practical importance, methods based on carrying out a random selection and checking that it satisfies the appropriate restrictions have proved to be simpler and faster than the best direct algorithms known to date. Similarly, in combinatorics there are objects (such as error-correcting codes) whose existence is easy to establish by probabilistic methods, but for which the only explicit constructions available are very complex and only approximate the optimal ones.
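
A classic instance of this “choose at random and check” strategy is the Miller-Rabin primality test, sketched below for illustration: each random base either certifies that a number is composite or lets it pass, and a few dozen rounds reduce the error probability to a negligible level.

```python
# Miller-Rabin: a randomized primality test. Each random base a either
# proves n composite or lets it pass; a composite n slips through one
# round with probability at most 1/4, so k rounds give error <= 4**-k.
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2**r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # a is a witness: n is certainly composite
    return True               # very probably prime

print(is_probable_prime(2**61 - 1))   # a known Mersenne prime -> True
print(is_probable_prime(2**61 + 1))   # composite (divisible by 3) -> False
```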

It is perhaps surprising to learn that in recent years results have been obtained which suggest that every randomized algorithm can be simulated by a deterministic algorithm of comparable efficiency. As examples we have Agrawal’s deterministic algorithm for checking in polynomial time whether a number is prime, and Omer Reingold’s deterministic algorithm for solving connectivity problems in undirected graphs using only a logarithmic amount of memory.

Worthy of mention in this connection is Ronitt Rubinfeld’s research, which is centered on the study of algorithms of complexity less than linear, that is to say, sublinear. At a time when enormous amounts of data must be handled, even algorithms of linear complexity can prove to be impracticable. Many interesting problems exist for which algorithms of sublinear complexity are known, although they are often randomized and provide approximate solutions. With regard to this point, Luca Trevisan will speak on the elimination of randomness, quasi-randomness and the explicit construction of combinatorial objects such as error-correcting codes. 

Jon Kleinberg’s talk will deal with graphs in which any pair of nodes is linked by a short path (small-world graphs) and with randomized methods for finding such paths. This avenue of research has applications to the theory of algorithms and to discrete probability.

Tim Roughgarden will address the connections between theoretical computer science and game theory, known as algorithmic game theory, with particular attention to the use of potential functions for bounding the inefficiency of equilibria in different models of selfish behaviour in networks. An example of this behaviour can be found in the well-known dilemma posed by two prisoners in solitary confinement who are given the choice between two options: if, because of their isolation from each other, the behaviour of each one is governed by self-interest, then the final result will be negative for both.
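
The dilemma can be written out in a few lines (the payoff values below are the usual textbook ones, chosen only for illustration): defecting is the best reply to either choice by the other prisoner, yet the resulting equilibrium is worse for both than mutual cooperation.

```python
# Prisoner's dilemma: years in prison for (my choice, other's choice).
# Lower is better. Defecting is a dominant strategy for each player,
# but the resulting equilibrium (defect, defect) is worse for both
# than the cooperative outcome (cooperate, cooperate).
years = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    10,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    5,
}

for other in ("cooperate", "defect"):
    best = min(("cooperate", "defect"), key=lambda me: years[(me, other)])
    print(f"if the other prisoner {other}s, my best reply is to {best}")

print("equilibrium outcome: ", years[("defect", "defect")], "years each")
print("cooperative outcome: ", years[("cooperate", "cooperate")], "year each")
```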

For his part, Alexander Holevo will present results related to quantum computation, a paradigm of computation based on quantum mechanics, alternative to the classical one, in which quantum bits are used instead of ordinary bits. The quantum paradigm makes new algorithms possible, and the same task may have different complexity in classical and in quantum computation, all of which has aroused great expectations, since it renders some formerly intractable problems tractable. It is worth pointing out, for example, that Peter Shor was awarded the Nevanlinna Prize in 1998 for his polynomial-complexity factoring algorithm based on quantum computation.

Manuel Ojeda Aciego
Lecturer in Applied Mathematics at the University of Málaga.



Satellite Conferences: Gran Canaria

Intelligent Computation Imitates the “Practical Wisdom” of Living Beings

The possibility of designing systems that imitate how human beings, animals and other living beings process information and solve problems has long been a dream of Computational Science. Today, after half a century of research work, techniques have been developed which provide benefits for a broad spectrum of sectors. Neural networks, expert systems, fuzzy logic and evolutionary computation are just some of the techniques that have proved capable of identifying complex patterns in vast sets of data, providing support for decision-making based on qualitative and quantitative factors, and giving solutions to problems involving highly complex variables. Armed with these resources, computer science is now able to tackle designs for increasingly complex applications, manage greater volumes of information, seek “intelligent” solutions and discover unsuspected correlations in data bases.
The Fifth International Conference on Engineering Computational Technology and the Eighth International Conference on Computational Structures Technology, the two symposia to be held in Las Palmas de Gran Canaria this September, will cover all these topics, as well as debating the latest computational technology applications to all aspects of engineering, structural mechanics and other fields. Furthermore, the latest advances in hardware and software, algorithms and theoretical development will also be discussed. 
The two conferences will be held concurrently, and participants will be able to attend the talks given at both. In addition to mathematicians and engineers, scientists from other specialized fields are expected to attend.

 

“The Fifth International Conference
on Engineering Computational Technology”
Person to contact: Gustavo Montero
e-mail: gustavo@dma.ulpgc.es
web: http://www.civil-comp.com/conf/ect2006.htm
“The Eighth International Conference
on Computational Structures Technology”
Person to contact: Rafael Montenegro
e-mail:  rafa@dma.ulpgc.es
web: http://www.civil-comp.com/conf/cst2006.htm
Las Palmas de Gran Canaria
12-15 September 2006


Applications

Encrypted Messages

Exchange of information is an on-going process in 21st century society, where data must travel rapidly and constantly, whether it be in the form of bank transfers, telephone conversations or official documents. The robustness of the entire system rests on the possibility of encrypting information to enable it to be transmitted quickly and safely without being used or intercepted by prying eyes. Once again, mathematics has a key role to play in this process. According to Alejandro Melle, professor of algebra at the Complutense University of Madrid, many theoretical systems have been developed in the field of encryption. However, most of them cannot be used in applications because they are not sufficient to ensure a secure and fluid exchange of information.
The mathematical community is hard at work both in the generation of encryption algorithms and in cryptanalysis; that is, the breaking of encryption algorithms. It is this combination that makes security really effective, since cryptographic algorithms must necessarily belong to the public sphere, so that the security of the cryptosystem is based on mathematics and not on secrecy. Trends in security processes and protocols currently used in the world are largely set by the NSA (National Security Agency) and by the NIST (National Institute of Standards and Technology), both North American organizations. Furthermore, the most frequently employed encryption protocols are based on two fundamental problems of mathematics: “the large-number factorization problem” and “the discrete logarithm problem”.
In particular, the RSA protocol, which is the most commonly used, revolves around the large-number factorization problem: given a very large number N, it is very difficult to find its prime factors p and q such that N = p x q. Nevertheless, even though it is difficult to find these factors, the increased availability of more powerful computers requires the use of ever larger numbers in order to prevent security problems from arising. As Melle explains, it is customary now to work with key sizes of 1024 bits or even 2048 bits, and the greater the key size, the slower the speed of operation. It is for that reason that the Fábrica Nacional de la Moneda y Timbre (National Mint), which acts as the State Certifying Authority (responsible for issuing digital certificates for official transactions with the Government), officially advises against the use of key sizes of 2048 bits.
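
The idea can be illustrated with a toy version of RSA (the primes below are absurdly small and serve only to show the mechanics; real keys use primes hundreds of digits long, precisely so that recovering p and q from N is out of reach):

```python
# Toy RSA: everything about the scheme is public except p and q.
# Security rests on the difficulty of recovering p and q from N alone.
p, q = 61, 53                     # tiny illustrative primes (never use in practice)
N = p * q                         # public modulus, 3233
phi = (p - 1) * (q - 1)           # computable only if you know p and q
e = 17                            # public exponent, coprime with phi
d = pow(e, -1, phi)               # private exponent: e * d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, N)   # anyone can encrypt with (N, e)
recovered = pow(ciphertext, d, N) # only the holder of d can decrypt
print(ciphertext, recovered)      # recovered == 65
```
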
Moreover, protocols based on the discrete logarithm problem work either on finite fields or on elliptic curves over finite fields. Cryptography based on elliptic curves ensures security with much smaller key sizes, and is therefore used on devices where storage space is a determining factor, such as credit cards.

For further information:

Alejandro Melle: amelle@mat.ucm.es

Cryptomathic company
http://www.cryptomathic.com/labs/ellipticcurves.html

Second Cryptographic Hash Workshop
http://www.csrc.nist.gov/pki/HashWorkshop/index.html

Mathematics and Internet Security
http://www.mathaware.org/mam/06/