Search  /  Búsqueda

www.fgalindosoria.com/informatica/methods/search/

 

Fernando Galindo Soria

 

www.fgalindosoria.com             fgalindo@ipn.mx

Red de Desarrollo Informático   REDI

 

Web page created: Mexico City, July 28, 2010

Last updated: August 1, 2010

 

¿Buscar una aguja en un pajar? Eso no tiene problema; el problema es encontrar una aguja específica en un pajar lleno de agujas.

Algunas soluciones para encontrar una aguja en un pajar:

Quemar el pajar y sacar la aguja.

Pasar un electroimán gigante mientras se cierne la paja.

Quemar el pajar y pasar un electroimán gigante.

 

Finding a needle in a haystack? That is no problem; the problem is finding a specific needle in a haystack full of needles.

Some solutions for finding a needle in a haystack:

Burn the haystack and pick out the needle.

Pass a giant electromagnet over the straw while sifting it.

Burn the haystack and pass a giant electromagnet over it.

Fernando Galindo Soria,  1980

 

 

El problema de la búsqueda crea un espacio disciplinar dentro de la Informática que tiene sus propios fundamentos, teorías, leyes, métodos, técnicas y herramientas.

 

La búsqueda es el proceso general de la Informática orientado a encontrar algo, por ejemplo: una persona, animal o cosa en algún lugar, ciudad, bosque, océano,...; un dato en una biblioteca, libro, diccionario,...; el patrón o la forma general presente en múltiples objetos, procesos, fenómenos; la cura de una enfermedad; una nueva teoría, teorema o ecuación; la búsqueda de una mejor calidad de vida, o escuela donde estudiar, o lugar donde vivir; las llaves del carro o el mismo carro; la información de algo o alguien en una base de datos; un virus en un sistema informático; un error en un documento, teoría, demostración, programa, ...; una buena fotografía; etc., etc.

 

O sea que cuando se habla de búsqueda, se habla de buscar datos, imágenes, objetos, teoría, leyes, métodos, herramientas, ecuaciones, objetos en un espacio, cosas perdidas, nuevos caminos, etc., etc.

 

Como se ve es una cantidad extremadamente disímbola de casos y situaciones, por lo que se tiene un espacio enorme de investigación, donde la labor del informático es encontrar las teorías, teoremas, ecuaciones, métodos, técnicas, herramientas, etc., que permitan encontrar los problemas y soluciones generales y cómo aplicarlas a los diferentes casos.

http://www.fgalindosoria.com/informatica/methods/search/notes/search.htm

 

*************************************************************************

The search problem creates a disciplinary space within Informatics that has its own foundations, theories, laws, methods, techniques and tools.

 

Search is the general process of Informatics aimed at finding something, for example: a person, animal or thing in some place, city, forest, ocean, ...; a piece of data in a library, book, dictionary, ...; the pattern or general form present in multiple objects, processes or phenomena; the cure for a disease; a new theory, theorem or equation; a better quality of life, a school to study at, or a place to live; the car keys, or the car itself; information about something or someone in a database; a virus in a computer system; an error in a document, theory, proof, program, ...; a good photograph; etc., etc.

 

That is, when we speak of search, we speak of searching for data, images, objects, theories, laws, methods, tools, equations, objects in a space, lost things, new paths, etc., etc.

 

As can be seen, this is an extremely heterogeneous collection of cases and situations, so there is an enormous research space, in which the task of the informatician is to find the theories, theorems, equations, methods, techniques, tools, etc., that capture the general problems and solutions, and how to apply them to the different cases.

http://www.fgalindosoria.com/informatica/methods/search/notes/search.htm

 

 

 

 

 

INDEX

ÍNDICE

1 General Areas

2 Particular Methods

2.1 Particle swarm

2.2 Ant colony algorithms

2.3 Simulated annealing (SA)

2.4 Tabu search

2.5 Harmony search

2.6 Beam search

2.7 String searching algorithm

3 Papers

3.1 Swarm search

3.2 Space search

1 Áreas Generales

2 Métodos Particulares

2.1 Enjambres de partículas

2.2 Algoritmo de hormigas

2.3 Recocido simulado

2.4 Búsqueda tabú

2.5 Harmony search

2.6 Beam search

2.7 Algoritmos de búsqueda de subcadenas

3 Artículos

3.1 Búsqueda mediante enjambres

3.2 Espacio de Búsqueda

 

 

 

LINKS  /  LIGAS

 

 

 

*******************************************************************

General Areas   /   Áreas Generales

 

Category:Search algorithms

http://en.wikipedia.org/wiki/Category:Search_algorithms

 

Categoría: Algoritmos de búsqueda

http://es.wikipedia.org/wiki/Categor%C3%ADa:Algoritmos_de_b%C3%BAsqueda

 

 

*************************************

Search algorithm

“In computer science, a search algorithm, broadly speaking, is an algorithm for finding an item with specified properties among a collection of items. The items may be stored individually as records in a database; or may be elements of a search space defined by a mathematical formula or procedure, such as the roots of an equation with integer variables; or a combination of the two, such as the Hamiltonian circuits of a graph.”

http://en.wikipedia.org/wiki/Search_algorithm
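To make the quoted definition concrete, a minimal hypothetical example (not from the article): when the items are stored sorted by key, binary search finds a record with a given property in O(log n) comparisons.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle of the remaining range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1              # target can only be in the upper half
        else:
            hi = mid - 1              # target can only be in the lower half
    return -1
```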

 

 

Espacio de búsqueda


“En optimización, espacio de búsqueda se refiere al dominio de la función a ser optimizada. En el caso de los algoritmos de búsqueda, que manejan espacios discretos, se refiere al conjunto de todas las posibles soluciones candidatas a un problema.”

http://es.wikipedia.org/wiki/Espacio_de_b%C3%BAsqueda

 

 

*************************************

Metaheuristic

“In computer science, metaheuristic designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee an optimal solution is ever found. Many metaheuristics implement some form of stochastic optimization.

Other terms having a similar meaning as metaheuristic, are: derivative-free, direct search, black-box, or indeed just heuristic optimizer. Several books and survey papers have been published on the subject”

http://en.wikipedia.org/wiki/Metaheuristic

 

Metaheurística

“Una metaheurística es un método heurístico para resolver un tipo de problema computacional general, usando los parámetros dados por el usuario sobre unos procedimientos genéricos y abstractos de una manera que se espera eficiente. Normalmente, estos procedimientos son heurísticos. El nombre combina el prefijo griego "meta" ("más allá", aquí con el sentido de "nivel superior") y "heurístico" (de ευρισκειν, heuriskein, "encontrar").

Las metaheurísticas generalmente se aplican a problemas que no tienen un algoritmo o heurística específica que dé una solución satisfactoria; o bien cuando no es posible implementar ese método óptimo. La mayoría de las metaheurísticas tienen como objetivo los problemas de optimización combinatoria, pero por supuesto, se pueden aplicar a cualquier problema que se pueda reformular en términos heurísticos, por ejemplo en resolución de ecuaciones booleanas.

Las metaheurísticas no son la panacea y suelen ser menos eficientes que las heurísticas específicas, en varios órdenes de magnitud, en problemas que aceptan este tipo de heurísticas crudas.”

http://es.wikipedia.org/wiki/Metaheur%C3%ADstica

 

 

Hyper-heuristic

http://en.wikipedia.org/wiki/Hyper-heuristics

 

Hyper-heuristics versus meta-heuristics

“The fundamental difference between metaheuristics and hyper-heuristics is that most implementations of metaheuristics search within a search space of problem solutions, whereas hyper-heuristics always search within a search space of heuristics. Thus, when using hyper-heuristics, we are attempting to find the right method or sequence of heuristics in a given situation rather than trying to solve a problem directly. Moreover, we are searching for a generally applicable methodology rather than solving a single problem instance”

http://en.wikipedia.org/wiki/Hyper-heuristics#Hyper-heuristics_versus_meta-heuristics
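The distinction can be sketched in code. Below, a toy hyper-heuristic searches the space of heuristics rather than the space of solutions: it picks among low-level move operators in proportion to how often each one has improved the solution so far. The selection rule and operator set are illustrative assumptions, not taken from the quoted article.

```python
import random

def simple_hyperheuristic(f, x0, operators, iters=300):
    """Select among move operators, favouring those that improved f before."""
    x, fx = x0, f(x0)
    scores = [1.0] * len(operators)          # one credit score per operator
    for _ in range(iters):
        i = random.choices(range(len(operators)), weights=scores)[0]
        cand = operators[i](x)
        if f(cand) < fx:                     # keep only improving moves
            x, fx = cand, f(cand)
            scores[i] += 1.0                 # reward the operator that helped
    return x
```

With, say, one coarse operator and one fine one, the coarse operator explores early while the fine operator accumulates credit as the search closes in.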

 

 

*************************************

Stochastic optimization

http://en.wikipedia.org/wiki/Stochastic_search

 

 

Randomized search methods

“On the other hand, even when the data is exact, it is sometimes beneficial to deliberately introduce randomness into the search process as a means of speeding convergence and making the algorithm less sensitive to modeling errors. Further, the injected randomness may provide the necessary impetus to move away from a local solution when searching for a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly”

http://en.wikipedia.org/wiki/Stochastic_search#Randomized_search_methods
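A minimal sketch of this randomization principle (hypothetical example): a greedy hill climber restarted from random points, where the injected randomness of each restart gives the search a chance to leave the basin of a poor local solution.

```python
import random

def hill_climb(f, x0, step=0.5, iters=300):
    """Greedy local search: accept a random neighbour only if it improves f."""
    x = x0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) < f(x):
            x = cand
    return x

def random_restart(f, restarts=20, lo=-10.0, hi=10.0):
    """Run hill climbing from several random starting points, keep the best."""
    best = None
    for _ in range(restarts):
        x = hill_climb(f, random.uniform(lo, hi))
        if best is None or f(x) < f(best):
            best = x
    return best
```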

 

 

*************************************

swarm intelligence

 

swarm intelligence

swarm search

particle swarm

Particle

 

 

Swarm intelligence

“Swarm intelligence (SI) describes the collective behaviour of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.[1]

SI systems are typically made up of a population of simple agents or boids interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.”

http://en.wikipedia.org/wiki/Swarm_intelligence

 

 

Inteligencia de enjambre

“La inteligencia de enjambre es una rama de la Inteligencia artificial que se basa en el comportamiento colectivo de sistemas descentralizados y auto-organizados.

Los sistemas de inteligencia de enjambre están constituidos típicamente de agentes simples que interactúan entre ellos y con su ambiente. Los agentes siguen reglas simples y, aunque no existe una estructura de control que dictamine el comportamiento de cada uno de ellos, las interacciones locales entre los agentes conduce a la emergencia de un comportamiento global complejo. Ejemplos en la naturaleza incluyen colonias de hormigas, alineamiento de aves en vuelo, comportamiento de rebaños, crecimiento bacteriano y comportamiento de cardumenes.”

http://es.wikipedia.org/wiki/Inteligencia_de_enjambre

 

 

*******************************************************************

Particular Methods   /   Métodos Particulares

 

*************************************

Particle swarm

 

Particle swarm optimization

“Particle swarm optimization (PSO) is a method for performing numerical optimization without explicit knowledge of the gradient of the problem to be optimized. PSO is originally attributed to Kennedy, Eberhart and Shi [1] [2] and was first intended for simulating social behaviour. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart [3] describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli [4] [5].

PSO optimizes a problem by maintaining a population of candidate solutions called particles and moving these particles around in the search-space according to simple formulae. The movements of the particles are guided by the best found positions in the search-space, which are continually updated as better positions are found by the particles.”

http://en.wikipedia.org/wiki/Particle_swarm_optimization
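The "simple formulae" mentioned in the quote are the velocity and position updates. A minimal global-best PSO sketch, with commonly used (but here assumed) values for the inertia weight w and the attraction coefficients c1 and c2:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a basic global-best particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    gbest = min(pbest, key=f)[:]         # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity: inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest
```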

 

 

*************************************

Ant colony algorithms /  algoritmo de hormigas

Ant colony optimization 

 

 

Ant colony optimization

“The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.

This algorithm is a member of ant colony algorithms family, in swarm intelligence methods, and it constitutes some metaheuristic optimizations. Initially proposed by Marco Dorigo in 1992 in his PhD thesis,[1][2] the first algorithm was aiming to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several problems have emerged, drawing on various aspects of the behavior of ants.”

http://en.wikipedia.org/wiki/Ant_colony_optimization

 

Algoritmo hormiga

“El algoritmo hormiga o algoritmo de las hormigas es una técnica probabilística utilizada para solucionar problemas de cómputo; este algoritmo está inspirado en el comportamiento que presentan las hormigas para encontrar las trayectorias desde la colonia hasta el alimento”

http://es.wikipedia.org/wiki/Algoritmo_hormiga
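A minimal sketch of the pheromone feedback that drives ACO (a toy "double bridge" with just two paths, not Dorigo's full algorithm): ants choose a path with probability proportional to its pheromone, deposit an amount inversely proportional to path length, and pheromone evaporates each round, so the shorter path is reinforced.

```python
import random

def aco_two_bridges(short=1.0, long=2.0, ants=30, iters=50, rho=0.5):
    """Two-path toy: return final pheromone levels [short path, long path]."""
    tau = [1.0, 1.0]                     # pheromone on each path
    lengths = [short, long]
    for _ in range(iters):
        deposits = [0.0, 0.0]
        for _ in range(ants):
            p_short = tau[0] / (tau[0] + tau[1])
            choice = 0 if random.random() < p_short else 1
            deposits[choice] += 1.0 / lengths[choice]   # shorter path: bigger deposit
        # evaporation plus new deposits
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposits)]
    return tau
```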

 

 

*************************************

Simulated annealing (SA) o recocido simulado

 

Simulated annealing

“Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of applied mathematics, namely locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution.

The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local optima—which are the bane of greedier methods.

The method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983,[1] and by Vlado Černý in 1985.[2] The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953.[3]”

http://en.wikipedia.org/wiki/Simulated_annealing

 

 

Simulated annealing

“Simulated annealing (SA) o recocido simulado es un algoritmo de búsqueda meta-heurística para problemas de optimización global, es decir, encontrar una buena aproximación al óptimo global de una función en un espacio de búsqueda grande.

El nombre e inspiración viene del proceso de recocido del acero, una técnica que consiste en calentar y luego enfriar controladamente un material para aumentar el tamaño de sus cristales y reducir sus defectos. El calor causa que los átomos se salgan de sus posiciones iniciales (un mínimo local de energía) y se muevan aleatoriamente; el enfriamiento lento les da mayores probabilidades de encontrar configuraciones con menor energía que la inicial.”

http://es.wikipedia.org/wiki/Simulated_annealing
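The acceptance rule described above (take a nearby solution, accept worsening moves with probability exp(-delta/T), and lower T gradually) can be sketched as follows; this is a toy continuous version with illustrative parameter values.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.99, iters=2000, step=1.0):
    """Minimize f, accepting uphill moves with probability exp(-delta / T)."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # random "nearby" solution
        fc = f(cand)
        delta = fc - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = cand, fc          # move, possibly uphill while T is high
            if fx < fbest:
                best, fbest = x, fx   # remember the best solution ever seen
        t *= cooling                  # slow cooling: ever fewer uphill moves
    return best
```

The allowance for uphill moves early on is what lets the method escape local optima before T shrinks and the search becomes effectively greedy.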

 

 

*************************************

Tabu search  /  Búsqueda tabú

 

Búsqueda tabú

“La búsqueda tabú es un método de optimización matemática, perteneciente a la clase de técnicas de búsqueda local. La búsqueda tabú aumenta el rendimiento del método de búsqueda local mediante el uso de estructuras de memoria: una vez que una potencial solución es determinada, se la marca como "tabú" de modo que el algoritmo no vuelva a visitar esa posible solución. La búsqueda tabú es atribuída a Fred Glover.”

http://es.wikipedia.org/wiki/B%C3%BAsqueda_tab%C3%BA

 

Tabu search

“Tabu search is a mathematical optimization method, belonging to the class of local search techniques. Tabu search enhances the performance of a local search method by using memory structures: once a potential solution has been determined, it is marked as "taboo" ("tabu" being a different spelling of the same word) so that the algorithm does not visit that possibility repeatedly. Tabu search is attributed to Fred W. Glover.”

http://en.wikipedia.org/wiki/Tabu_search
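The memory structure described above can be sketched in a few lines. This toy version keeps a fixed-tenure list of recently visited solutions; real implementations usually record move attributes rather than whole solutions.

```python
from collections import deque

def tabu_search(f, x0, neighbors, iters=100, tenure=5):
    """Best-improvement local search with a short-term tabu memory."""
    x = x0
    best, fbest = x0, f(x0)
    tabu = deque([x0], maxlen=tenure)    # recently visited solutions are "tabu"
    for _ in range(iters):
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)            # move to best non-tabu neighbour, even if worse
        tabu.append(x)
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best
```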

 

 

*************************************

Harmony search

 

Harmony search

“Harmony search (HS) is a phenomenon-mimicking algorithm (also known as metaheuristic algorithm, soft computing algorithm or evolutionary algorithm) inspired by the improvisation process of musicians. In the HS algorithm, each musician (= decision variable) plays (= generates) a note (= a value) for finding a best harmony (= global optimum) all together.”

http://en.wikipedia.org/wiki/Harmony_search

 

Harmony Search Algorithm.

This site introduces the music-inspired Harmony Search algorithm and its applications.

http://sites.google.com/a/hydroteq.com/www/
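The musician analogy maps onto code roughly as follows: a minimal sketch with the usual HS parameters (harmony memory size, memory-considering rate hmcr, pitch-adjusting rate par, bandwidth bw) set to illustrative, assumed values.

```python
import random

def harmony_search(f, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=500):
    """Minimal harmony search minimizing f over [-5, 5]^dim."""
    memory = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                v = random.choice(memory)[d]        # recall a note from memory
                if random.random() < par:
                    v += random.uniform(-bw, bw)    # pitch adjustment
            else:
                v = random.uniform(-5, 5)           # entirely random note
            new.append(v)
        worst = max(memory, key=f)
        if f(new) < f(worst):                       # replace the worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)
```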

 

 

*************************************

Beam search

“Beam search is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic which attempts to predict how close a partial solution is to a complete solution (goal state). In beam search, only a predetermined number of best partial solutions are kept as candidates.[1]”

http://en.wikipedia.org/wiki/Beam_search
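A generic sketch of that idea (hypothetical interface: a `score` heuristic, a `successors` function and a start state): at each depth, expand every partial solution in the beam and keep only the `width` highest-scoring candidates.

```python
def beam_search(score, successors, start, width=3, depth=8):
    """Keep only the `width` best partial solutions at each depth."""
    beam = [start]
    for _ in range(depth):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break                                   # no states left to expand
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return beam[0]
```

For example, growing bit strings one character at a time toward a hidden target, scoring a partial string by its matched positions, beam search recovers the target without enumerating all 2^8 complete strings.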

 

 

*************************************

String searching algorithm

Wikipedia (2013-03-06)

“In computer science, string searching algorithms, sometimes called string matching algorithms, are an important class of string algorithms that try to find a place where one or several strings (also called patterns) are found within a larger string or text.

Let Σ be an alphabet (finite set). Formally, both the pattern and searched text are vectors of elements of Σ. The Σ may be a usual human alphabet (for example, the letters A through Z in the Latin alphabet). Other applications may use binary alphabet (Σ = {0,1}) or DNA alphabet (Σ = {A,C,G,T}) in bioinformatics.”

http://en.wikipedia.org/wiki/String_searching_algorithm

 

Algoritmos de búsqueda de subcadenas

Wikipedia (2013-03-06)

“A este tipo de algoritmos también se les llama Algoritmos de patrones en un texto, algoritmos de emparejamiento de secuencias, algoritmos de casamiento de secuencias o simplemente por su nombre en inglés string matching. Este tipo de algoritmos persiguen encontrar subcadena/s con alguna propiedad en una cadena de caracteres.”

http://es.wikipedia.org/wiki/Algoritmos_de_b%C3%BAsqueda_de_subcadenas
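The problem statement above has an obvious brute-force solution, a useful baseline before the specialised algorithms that follow (hypothetical example):

```python
def naive_search(text, pattern):
    """Return every index where pattern occurs in text; O(n*m) worst case."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if text[i:i + m] == pattern]
```

Note that overlapping occurrences are all reported, which matters for patterns such as "aa".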

 

 

KMP algorithm, Knuth–Morris–Pratt algorithm

Algoritmo KMP, Algoritmo Knuth-Morris-Pratt

 

KMP algorithm

Knuth–Morris–Pratt algorithm

Wikipedia (2013-03-06)

“In computer science, the Knuth–Morris–Pratt string searching algorithm (or KMP algorithm) searches for occurrences of a "word" W within a main "text string" S by employing the observation that when a mismatch occurs, the word itself embodies sufficient information to determine where the next match could begin, thus bypassing re-examination of previously matched characters.

The algorithm was conceived in 1974 by Donald Knuth and Vaughan Pratt, and independently by James H. Morris. The three published it jointly in 1977.”

http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm

 

Algoritmo KMP

Algoritmo Knuth-Morris-Pratt

Wikipedia (2013-03-06)

“El algoritmo KMP es un algoritmo de búsqueda de subcadenas simple y por lo tanto su objetivo es buscar la existencia de una subcadena dentro de una cadena. Para ello utiliza información basada en los fallos previos, aprovechando la información que la propia palabra a buscar contiene de sí (sobre ella se precalcula una tabla de valores), para determinar donde podría darse la siguiente existencia, sin necesidad de analizar más de 1 vez los caracteres de la cadena donde se busca.

El algoritmo originalmente fue elaborado por Donald Knuth y Vaughan Pratt y de modo independiente por James H. Morris en 1977, pero lo publicaron juntos los tres.”

http://es.wikipedia.org/wiki/Algoritmo_Knuth-Morris-Pratt
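The observation quoted above, that on a mismatch the pattern itself tells us where the next match could begin, is captured by the failure (prefix) table. A compact sketch, assuming a non-empty pattern:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: all occurrences of pattern in text, O(n + m) time."""
    # failure[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = failure[k - 1]        # fall back without re-reading text
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = failure[k - 1]        # continue, allowing overlapping matches
    return hits
```

The failure table is what lets the text pointer move strictly forward: previously matched characters are never re-examined.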

 

 

*******************************************************************

*******************************************************************

Papers  /  Artículos

 

 

*******************************************************************

Swarm search

 

*************************************

Visualizing the search process of particle swarm optimization

Yong-Hyuk Kim, Kang Hoon Lee, Yourim Yoon

July 2009

GECCO '09: Proceedings of the 11th Annual conference on Genetic and evolutionary computation

Publisher: ACM

ABSTRACT

“It is a hard problem to understand the search process of particle swarm optimization over high-dimensional domain. The visualization depicts the total search process and then it will allow better understanding of how to tune the algorithm. For the investigation, we adopt Sammon's mapping, which is a well-known distance-preserving mapping. We demonstrate the usefulness of the proposed methodology by applying it to some function optimization problems.”

http://portal.acm.org/citation.cfm?id=1569901.1569909&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text available:

PDF (683.40 KB)

http://portal.acm.org/ft_gateway.cfm?id=1569909&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

PDF

Visualizing the search process of particle swarm optimization

http://cg.kw.ac.kr/kang/visual_pso/visual_pso.pdf

 

 

*************************************

Dynamic-probabilistic particle swarms

James Kennedy, US Bureau of Labor Statistics, Washington DC, June 2005

Genetic And Evolutionary Computation Conference
GECCO '05: Proceedings of the 2005 conference on Genetic and evolutionary computation

Washington DC, USA

SESSION: Ant colony optimization and swarm intelligence, Pages: 201 - 207  

Publisher: ACM

ABSTRACT

“The particle swarm algorithm is usually a dynamic process, where a point in the search space to be tested depends on the previous point and the direction of movement. The process can be decomposed, and probability distributions around a center can be used instead of the usual trajectory approach. A version that is both dynamic and Gaussian looks very promising.”

http://portal.acm.org/citation.cfm?id=1068009.1068040&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text available:

PDF (1.11 MB)

http://portal.acm.org/ft_gateway.cfm?id=1068040&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

PDF

Dynamic-probabilistic particle swarms

http://www.cs.bham.ac.uk/~wbl/biblio/gecco2005/docs/p201.pdf

 

 

*************************************

Scalability of particle swarm algorithms

Sébastien Piccand, University of Limerick, Limerick, Ireland

 Michael O'Neill, University College Dublin, Ireland

Jacqueline Walker, University of Limerick, Limerick, Ireland

Genetic And Evolutionary Computation Conference
Proceedings of the 9th annual conference on Genetic and evolutionary computation

London, England, 2007

Publisher: ACM  New York, NY, USA

ABSTRACT

“When dealing with complex optimisation problems, evolutionary computation techniques have proven to be very helpful. Amongst optimisation algorithms driven by evolutionary computation techniques, particle swarm algorithms have proven to be a very good alternative to genetic algorithms because of their faster convergence. However they can still suffer from premature convergence to local optima. Premature convergence occurs when the particles of the swarm are too close to each other to enable further exploration of the space. To put it another way, the dispersion or distribution of the swarm throughout the search space has been localised to a small region with a consequent stagnation of the search process. Many strategies have been used to try to prevent convergence to a local optimum. However little work has been done on problems of high dimensions. By high dimensions, we mean dimensions of 100 and above. This paper focuses, therefore, on the limitations of classical particle swarm algorithms when dealing with high dimensional problems. We compare different versions of Particle Swarm Algorithms: GBest, LBest with ring or random neighbourhood of size 3 [Clerc06], and GCPSO [VanDenBergh06]. Those algorithms were run twice, with a linearly decreasing inertia weight, and with the use of a constriction factor. We also used two repulsive-based algorithms: Charged PSO [Blackwell02] and Predator Prey [Silva02]. Our main focus is problems of high dimensionality. In particular, we applied the different algorithms to the benchmark functions Ackley and Rosenbrock, in the following dimensions: 100, 250, 500. Even though these represent problems of relatively small dimensionality in a real-world context, experiments on higher dimensions have not been necessary to show the limits of the algorithms studied. Each experiment was run 20 times. The swarm size was chosen from [30, 50, 100, 250, 500] and so that it is not greater than the problem size. Each experiment is 10000 iterations long. A one-way analysis of variance (ANOVA) was used to compare the performance of each algorithm. We found that the LBest algorithms perform significantly better when used with the constriction factor. GBest and GCPSO perform better with linearly decreasing inertia with a small swarm size, but better with the constriction factor with a big swarm size. The improvement of GCPSO on GBest is not statistically significant in our experiments. The LBest algorithms with the constriction factor seem to be the best algorithms to handle problems of high dimensionality. The LBest algorithm with fixed neighbourhood seems to be less sensitive to the size of the swarm than the LBest algorithm with random neighbourhood. Especially, in the case of the Rosenbrock function of size 500, increasing the size of the swarm does not improve the performance of LBest with constriction factor and fixed neighbourhood. The algorithms based on repulsion between particles, i.e. Charged Swarm and Predator Prey, do not perform very well. Indeed, even if the predator prey algorithm gives quite good results, it is trapped in a local optimum, as the fitness value stagnates on a constant value for the last 50% of iterations. This may come from a too low level of repulsion. Tuning the parameters used for repulsion seems to be very important for high dimensionality problems. Experiments show that almost all the algorithms managed to solve the problems for dimension 100 but none of the algorithms managed to solve the problem in the case of problems of dimension 250 and 500. The LBest algorithm with random neighbourhood and constriction factor performed the best. Further work will be done on modelling the size of the swarm required to be able to solve the problems. Other particle swarm algorithms will also be included.”

http://portal.acm.org/citation.cfm?id=1276958.1276993&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (56 KB)

http://portal.acm.org/ft_gateway.cfm?id=1276993&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

PDF

http://www.cs.york.ac.uk/rts/docs/GECCO_2007/docs/p179.pdf

 

 

*************************************

Evolutionary swarm design of architectural idea models

Sebastian von Mammen, Christian Jacob

University of Calgary, Calgary, AB, Canada

Genetic And Evolutionary Computation Conference
Proceedings of the 10th annual conference on Genetic and evolutionary computation

Atlanta, GA, USA

SESSION: Ant colony optimization, swarm intelligence, and artificial immune systems

2008

Publisher

ACM  New York, NY, USA

ABSTRACT

“In this paper we present a swarm grammar system that makes use of bio-inspired mechanisms of reproduction, communication and construction in order to build three-dimensional structures. Ultimately, the created structures serve as idea models that lend themselves to inspirations for architectural designs.

Appealing design requires structural complexity. In order to computationally evolve swarm grammar configurations that yield interesting architectural models, we observe their productivity, coordination, efficiency, and their unfolding diversity during the simulations. In particular, we measure the numbers of collaborators in each swarm individual's neighborhood, and we count the types of expressed swarm agents and built construction elements. At the end of the simulation the computation time is saved and the created structures are rated with respect to their approximation of pre-defined shapes. These ratings are incorporated into the fitness function of a genetic algorithm. We show that the conducted measurements are useful to direct an evolutionary search towards interesting yet well-constrained architectural idea models.”

http://portal.acm.org/citation.cfm?id=1389095.1389115&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (1.05 MB)

http://portal.acm.org/ft_gateway.cfm?id=1389115&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

PDF

http://vonmammen.org/science/t01pap320-vonmammen.pdf

 

 

*************************************

A new collaborative evolutionary-swarm optimization technique

Rodica Ioana Lung, D. Dumitrescu

Babes-Bolyai University, Cluj-Napoca, România

Genetic And Evolutionary Computation Conference
Proceedings of the 2007 GECCO conference companion on Genetic and evolutionary computation

WORKSHOP SESSION: Evolutionary algorithms for dynamic optimization problems (EvoDOP)

London, United Kingdom, 2007

Publisher

ACM  New York, NY, USA

ABSTRACT

“A new hybrid approach to optimization in dynamical environments called Collaborative Evolutionary-Swarm Optimization (CESO) is presented. CESO tracks moving optima in a dynamical environment by combining the search abilities of an evolutionary algorithm for multimodal optimization and a particle swarm optimization algorithm. A collaborative mechanism between the two methods is proposed by which the diversity provided by the multimodal technique is transmitted to the particle swarm in order to prevent its premature convergence. Numerical experiments indicate CESO as an efficient method compared with other evolutionary approaches.”

http://portal.acm.org/citation.cfm?id=1274000.1274043&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (151 KB)

http://portal.acm.org/ft_gateway.cfm?id=1274043&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

PDF

http://www.cs.bham.ac.uk/~wbl/biblio/gecco2007/docs/p2817.pdf

 

 

*************************************

Chaotic particle swarm optimization

Yanxia Sun, Guoyuan Qi, Zenghui Wang, Barend Jacobus van Wyk, Yskandar Hamam

Tshwane University of Technology, Pretoria, South Africa

ACM/SIGEVO Summit on Genetic and Evolutionary Computation
Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation

Shanghai, China, 2009

Publisher

ACM  New York, NY, USA

ABSTRACT

“A new particle swarm optimization (PSO) algorithm, which has a chaotic neural network structure, is proposed. The structure is similar to the Hopfield neural network with transient chaos, and has an improved ability to search for globally optimal solution and does not suffer from problems of premature convergence. The presented PSO model is discrete-time discrete-state. The bifurcation diagram of a particle shows that it converges to a stable fixed point from a strange attractor, guaranteeing system convergence.”

http://portal.acm.org/citation.cfm?id=1543834.1543902&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (1.26 MB)

http://portal.acm.org/ft_gateway.cfm?id=1543902&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819
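The paper builds chaos into a Hopfield-like network structure; a much simpler and common way to see chaos-driven PSO is to replace the uniform random coefficients of the velocity update with a logistic-map sequence. The sketch below shows only that simpler variant (an assumption for illustration, not the paper's model), again on a stand-in `sphere` objective.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def chaotic_pso_sketch(f, dim=2, n=10, iters=200, seed=2):
    """PSO whose stochastic coefficients come from a logistic-map
    chaotic sequence instead of a uniform random generator."""
    rng = random.Random(seed)
    z = 0.37  # chaotic state; chosen away from the map's fixed points

    def chaos():
        nonlocal z
        z = 4.0 * z * (1.0 - z)   # logistic map at r = 4 (fully chaotic)
        if not 0.0 < z < 1.0:     # reseed if rounding collapses the orbit
            z = 0.37
        return z

    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * chaos() * (pbest[i][d] - pos[i][d])
                             + 1.4 * chaos() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)

best_val = chaotic_pso_sketch(sphere)
```

The ergodic, non-repeating chaotic sequence is what such variants rely on to keep the swarm from settling prematurely.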

 

 

*************************************

Space transformation search: a new evolutionary technique

Hui Wang, Wuhan University, Wuhan, China

Zhijian Wu, Wuhan University, Wuhan, China

Yong Liu, University of Aizu, Fukushima, Japan

Jing Wang, Wuhan University, Wuhan, China

Dazhi Jiang, Wuhan University, Wuhan, China

Lili Chen, Wuhan University, Wuhan, China

ACM/SIGEVO Summit on Genetic and Evolutionary Computation
Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation

Shanghai, China, 2009

ACM  New York, NY, USA

ABSTRACT

“In this paper, a new evolutionary technique is proposed, namely space transformation search (STS), which transforms current search space to a new search space. By simultaneously evaluating solutions in current search space and transformed space, we can provide more chances to find solutions more closely to the global optimum and finally accelerate convergence speed. The proposed STS method can be applied to many evolutionary algorithms, and this paper only presents a STS based particle swarm optimization (PSO-STS). Experimental studies on 20 benchmark functions including 10 shifted functions show that the PSO-STS and its variations can not only achieve better results, but also obtain faster convergence speed than the standard PSO.”

http://portal.acm.org/citation.cfm?id=1543834.1543907&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (391 KB)

http://portal.acm.org/ft_gateway.cfm?id=1543907&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819
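The core STS idea of the abstract — evaluate each solution in the current space and in a transformed space, and keep the better of the two — can be sketched in a few lines. The concrete transformation used here, reflecting each coordinate through the centre of the search interval (x* = lo + hi − x), is an assumption in the style of opposition-based methods, used only for illustration; `sphere` is a stand-in objective.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def transform(x, lo, hi):
    # reflect each coordinate through the centre of [lo, hi]: x* = lo + hi - x
    return [lo + hi - xi for xi in x]

def sts_step(population, f, lo, hi):
    """Evaluate every solution in both the current and the transformed
    space, keeping whichever member of each pair is better."""
    out = []
    for x in population:
        x_t = transform(x, lo, hi)
        out.append(x if f(x) <= f(x_t) else x_t)
    return out

rng = random.Random(3)
pop = [[rng.uniform(-5.0, 10.0) for _ in range(2)] for _ in range(20)]
improved = sts_step(pop, sphere, -5.0, 10.0)
```

Each pair costs one extra evaluation but, by construction, the kept solution is never worse than the original, which is where the accelerated convergence reported in the abstract comes from.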

 

 

*************************************

Geometric differential evolution

Alberto Moraglio, University of Coimbra, Coimbra, Portugal

Julian Togelius, Dalle Molle Institute for Artificial Intelligence, Manno-Lugano, Switzerland

Genetic And Evolutionary Computation Conference
Proceedings of the 11th Annual conference on Genetic and evolutionary computation

Montreal, Québec, Canada, 2009

ISBN:978-1-60558-325-9

Publisher: ACM  New York, NY, USA

ABSTRACT

“Geometric Particle Swarm Optimization (GPSO) is a recently introduced formal generalization of traditional Particle Swarm Optimization (PSO) that applies naturally to both continuous and combinatorial spaces. Differential Evolution (DE) is similar to PSO but it uses different equations governing the motion of the particles. This paper generalizes the DE algorithm to combinatorial search spaces extending its geometric interpretation to these spaces, analogously as what was done for the traditional PSO algorithm. Using this formal algorithm, Geometric Differential Evolution (GDE), we formally derive the specific GDE for the Hamming space associated with binary strings and present experimental results on a standard benchmark of problems.”

http://portal.acm.org/citation.cfm?id=1569901.1570130&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

Full text

PDF (523 KB)

http://portal.acm.org/ft_gateway.cfm?id=1570130&type=pdf&coll=ACM&dl=ACM&CFID=98877147&CFTOKEN=69726819

 

 

Geometric Differential Evolution

Alberto Moraglio, Julian Togelius

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.157.8691

 

PDF

http://julian.togelius.com/Moraglio2009Geometric.pdf
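GDE generalizes Differential Evolution to combinatorial spaces; the classic continuous scheme it starts from, DE/rand/1/bin, is worth having in front of you. The sketch below shows only that classic baseline (standard textbook DE, not the paper's geometric construction), with `sphere` as a stand-in objective and conventional parameter values.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def de_rand_1_bin(f, dim=2, pop_size=15, F=0.6, CR=0.9, gens=120, seed=4):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random individuals, binomial crossover, greedy replacement."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            if f(trial) <= f(pop[i]):    # greedy one-to-one replacement
                pop[i] = trial
    return min(map(f, pop))

best_val = de_rand_1_bin(sphere)
```

The paper's contribution is to reinterpret exactly these difference-and-recombination moves geometrically so they make sense in, e.g., the Hamming space of binary strings.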

 

 

*******************************************************************

Space search

 

*************************************

Probabilistic state space search

Andreas Kuehlmann, IBM T. J. Watson Research Center, Yorktown Heights, NY

Kenneth L. McMillan, Cadence Design Systems, Berkeley, CA

Robert K. Brayton, University of California at Berkeley, Berkeley, CA

International Conference on Computer Aided Design
Proceedings of the 1999 IEEE/ACM international conference on Computer-aided design,
San Jose, California, United States, Pages: 574 – 579, 1999

ISBN:0-7803-5832-5

Publisher

IEEE Press  Piscataway, NJ, USA

ABSTRACT

“This paper describes a probabilistic approach to state space search. The presented method applies a ranking of the design states according to their probability of reaching a given target state based on a random walk model. This ranking can be used to prioritize an explicit or partial symbolic state exploration to find a trajectory from a set of initial states to a set of target states. A symbolic technique for estimating the reachability probability is described which implements a smooth trade-off between accuracy and computing effort. The presented probabilistic state space search complements incomplete verification methods which are specialized in finding errors in large designs.”

http://portal.acm.org/citation.cfm?id=339492.340079

 

Full text

PDF (353 KB)

http://portal.acm.org/ft_gateway.cfm?id=340079&type=pdf&coll=ACM&dl=ACM&CFID=96636536&CFTOKEN=68495952

 

 

PDF

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.5295&rep=rep1&type=pdf
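The ranking idea in the abstract — order states by their probability of reaching a target under a random-walk model, then use that ranking to prioritize explicit exploration — can be illustrated on a toy graph. The paper computes the probabilities symbolically; this sketch simply estimates them by sampling random walks, and the graph, target, and walk parameters are all made up for illustration.

```python
import heapq
import random

# Toy transition system: state -> list of successor states
GRAPH = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
TARGET = 5
RNG = random.Random(5)

def reach_probability(state, walks=200, max_len=10):
    """Monte Carlo estimate of the chance that a random walk from `state`
    reaches TARGET within max_len steps."""
    hits = 0
    for _ in range(walks):
        s, steps = state, 0
        while s != TARGET and GRAPH[s] and steps < max_len:
            s = RNG.choice(GRAPH[s])
            steps += 1
        hits += (s == TARGET)
    return hits / walks

def guided_search(start):
    """Best-first explicit search, expanding the states with the highest
    estimated reach probability first."""
    frontier = [(-reach_probability(start), start, [start])]
    visited = set()
    while frontier:
        _, s, path = heapq.heappop(frontier)
        if s == TARGET:
            return path
        if s in visited:
            continue
        visited.add(s)
        for nxt in GRAPH[s]:
            if nxt not in visited:
                heapq.heappush(frontier,
                               (-reach_probability(nxt), nxt, path + [nxt]))
    return None

path = guided_search(0)
```

The trade-off the paper describes is exactly the one visible here: better probability estimates cost more effort but steer the search toward the target sooner.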

 

 

*************************************

Space transformation search: a new evolutionary technique

Hui Wang, Wuhan University, Wuhan, China

Zhijian Wu, Wuhan University, Wuhan, China

Yong Liu, University of Aizu, Fukushima, Japan

Jing Wang, Wuhan University, Wuhan, China

Dazhi Jiang, Wuhan University, Wuhan, China

Lili Chen, Wuhan University, Wuhan, China

ACM/SIGEVO Summit on Genetic and Evolutionary Computation
Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation, Shanghai, China, 2009

Publisher

ACM  New York, NY, USA

ABSTRACT

“In this paper, a new evolutionary technique is proposed, namely space transformation search (STS), which transforms current search space to a new search space. By simultaneously evaluating solutions in current search space and transformed space, we can provide more chances to find solutions more closely to the global optimum and finally accelerate convergence speed. The proposed STS method can be applied to many evolutionary algorithms, and this paper only presents a STS based particle swarm optimization (PSO-STS). Experimental studies on 20 benchmark functions including 10 shifted functions show that the PSO-STS and its variations can not only achieve better results, but also obtain faster convergence speed than the standard PSO.”

http://portal.acm.org/citation.cfm?id=1543907&dl=GUIDE&coll=GUIDE&CFID=96757023&CFTOKEN=47040450

 

Full text

PDF (391 KB)

http://portal.acm.org/ft_gateway.cfm?id=1543907&type=pdf&coll=ACM&dl=ACM&CFID=96636536&CFTOKEN=68495952

 

 

*************************************

t-Spanners for metric space searching

Gonzalo Navarro, Center for Web Research, Department of Computer Science, University of Chile, Blanco Encalada 2120, Santiago, Chile

Rodrigo Paredes, Center for Web Research, Department of Computer Science, University of Chile, Blanco Encalada 2120, Santiago, Chile

Edgar Chávez, Escuela de Ciencias Físico-Matemáticas, Universidad Michoacana, Morelia, Mexico

Data & Knowledge Engineering, Volume 63, Issue 3, Pages: 820-854, (December 2007)

Publisher

Elsevier Science Publishers B. V., Amsterdam, The Netherlands

ABSTRACT

“The problem of Proximity Searching in Metric Spaces consists in finding the elements of a set which are close to a given query under some similarity criterion. In this paper we present a new methodology to solve this problem, which uses a t-spanner G'(V,E) as the representation of the metric database. A t-spanner is a subgraph G'(V,E) of a graph G(V,A), such that E ⊆ A and G' approximates the shortest path costs over G within a precision factor t. Our key idea is to regard the t-spanner as an approximation to the complete graph of distances among the objects, and to use it as a compact device to simulate the large matrix of distances required by successful search algorithms such as AESA. The t-spanner properties imply that we can use shortest paths over G' to estimate any distance with bounded-error factor t. For this sake, several t-spanner construction, updating, and search algorithms are proposed and experimentally evaluated. We show that our technique is competitive against current approaches. For example, in a metric space of documents our search time is only 9% over AESA, yet we need just 4% of its space requirement. Similar results are obtained in other metric spaces. Finally, we conjecture that the essential metric space property to obtain good t-spanner performance is the existence of clusters of elements, and enough empirical evidence is given to support this claim. This property holds in most real-world metric spaces, so we expect that t-spanners will display good behavior in most practical applications. Furthermore, we show that t-spanners have a great potential for improvements.”

http://portal.acm.org/citation.cfm?id=1290202.1294388&coll=GUIDE&dl=GUIDE&CFID=99056949&CFTOKEN=85549470

 

 

t-Spanners for metric space searching

Index

http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TYX-4NWCD7F-1&_user=10&_coverDate=12%2F31%2F2007&_alid=1418971982&_rdoc=1&_fmt=high&_orig=search&_cdi=5630&_sort=r&_docanchor=&view=c&_ct=45&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=ecc1a6bdf5162912aa38e4ad4538b3c5
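The t-spanner property the abstract relies on — shortest paths over the sparse subgraph G' approximate every pairwise distance within factor t — is easy to see with the classic greedy construction from the spanner literature (a standard algorithm, not necessarily the one in the paper). The sample points and t value below are made up for illustration.

```python
import heapq
import itertools
import math

def spanner_dist(adj, src, dst):
    """Shortest-path distance over the current spanner edges (Dijkstra)."""
    pq = [(0.0, src)]
    best = {src: 0.0}
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > best.get(u, math.inf):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < best.get(v, math.inf):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

def greedy_t_spanner(points, t):
    """Classic greedy spanner: scan pairs by increasing distance, adding
    an edge only when the current spanner stretch for that pair exceeds t."""
    n = len(points)
    adj = {i: {} for i in range(n)}
    for i, j in sorted(itertools.combinations(range(n), 2),
                       key=lambda e: math.dist(points[e[0]], points[e[1]])):
        d = math.dist(points[i], points[j])
        if spanner_dist(adj, i, j) > t * d:
            adj[i][j] = d
            adj[j][i] = d
    return adj

POINTS = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (0.0, 1.5), (3.0, 2.0)]
SPANNER = greedy_t_spanner(POINTS, t=1.5)
```

Because every pair is within stretch t by construction, spanner shortest paths give the bounded-error distance estimates the paper uses in place of AESA's full distance matrix.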

 

 

*******************************************************************

Speeding up spatial approximation search in metric spaces

Karina Figueroa, Universidad Michoacana, México

Edgar Chavez, Universidad Michoacana/CICESE, México

Gonzalo Navarro, University of Chile, Chile

Rodrigo Paredes, University of Chile, Chile

Journal of Experimental Algorithmics (JEA)

Volume 14, (December 2009)

Publisher: ACM  New York, NY, USA

ABSTRACT

“Proximity searching consists of retrieving from a database those elements that are similar to a query object. The usual model for proximity searching is a metric space where the distance, which models the proximity, is expensive to compute. An index uses precomputed distances to speedup query processing. Among all the known indices, the baseline for performance for about 20 years has been AESA. This index uses an iterative procedure, where at each iteration it first chooses the next promising element (“pivot”) to compare to the query, and then it discards database elements that can be proved not relevant to the query using the pivot. The next pivot in AESA is chosen as the one minimizing the sum of lower bounds to the distance to the query proved by previous pivots. In this article, we introduce the new index iAESA, which establishes a new performance baseline for metric space searching. The difference with AESA is the method to select the next pivot. In iAESA, each candidate sorts previous pivots by closeness to it, and chooses the next pivot as the candidate whose order is most similar to that of the query. We also propose a modification to AESA-like algorithms to turn them into probabilistic algorithms.

Our empirical results confirm a consistent improvement in query performance. For example, we perform as few as 60% of the distance evaluations of AESA in a database of documents, a very important and difficult real-life instance of the problem. For the probabilistic algorithm, we perform in a database of faces up to 40% of the comparisons made by the best alternative algorithm to retrieve the same percentage of the correct answer. Based on the empirical results, we conjecture that the new probabilistic AESA-like algorithms will become, as AESA had been for exact algorithms, a reference point establishing, in practice, a lower bound on how good a probabilistic proximity search algorithm can be.”

http://portal.acm.org/citation.cfm?id=1498698.1564506

 

Full text

PDF (967 KB)

http://portal.acm.org/ft_gateway.cfm?id=1564506&type=pdf&coll=ACM&dl=ACM&CFID=96636536&CFTOKEN=68495952

 

 

PDF

www.dcc.uchile.cl/~gnavarro/ps/jea09.pdf
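The AESA baseline that iAESA improves on is compact enough to sketch: a precomputed matrix of database distances plus triangle-inequality lower bounds prune candidates, and the next pivot is the candidate with the smallest accumulated lower bound. This sketch shows only that baseline loop (not iAESA's pivot reordering), on a made-up one-dimensional database with |a − b| as the metric.

```python
import math
import random

def aesa_nn(db, query, dist):
    """AESA-style exact nearest-neighbour search. Only distances from the
    query are counted as evaluations; database-to-database distances are
    assumed precomputed, as in AESA."""
    n = len(db)
    D = [[dist(db[i], db[j]) for j in range(n)] for i in range(n)]
    lower = [0.0] * n        # lower bound on dist(query, db[i]) per element
    alive = set(range(n))
    best_i, best_d, evals = -1, math.inf, 0
    while alive:
        p = min(alive, key=lambda i: lower[i])  # most promising pivot
        alive.remove(p)
        d_qp = dist(query, db[p])               # a real distance evaluation
        evals += 1
        if d_qp < best_d:
            best_i, best_d = p, d_qp
        for i in list(alive):
            # triangle inequality: |d(q,p) - d(p,i)| <= d(q,i)
            lower[i] = max(lower[i], abs(d_qp - D[p][i]))
            if lower[i] >= best_d:
                alive.remove(i)  # cannot beat the current nearest
    return best_i, best_d, evals

rng = random.Random(7)
DB = [rng.uniform(0.0, 100.0) for _ in range(50)]
QUERY = 37.5
idx, nn_d, n_evals = aesa_nn(DB, QUERY, lambda a, b: abs(a - b))
```

iAESA keeps this pruning intact and changes only how the next pivot is chosen, which is why the article can report fewer distance evaluations for the same exact answer.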

 

 

*******************************************************************

 

Artificial Immune System

 

Immune Evolutionary Algorithm

 

 

Artificial Learning System

 

Evolutionary Reinforcement Learning

 

*******************************************************************