Research in Computer Science and Informatics at the University of Portsmouth

Additional information examples: Southampton

OtherDetails (Southampton)
This paper shows how and why today's very widespread work on consciousness in artificial intelligence, robotics and cognitive science will have to give way to the Turing Test. Many practical and philosophical issues are rigorously treated and clarified. The influence of this paper will continue to grow before it reaches its full impact, because its methodological implications are highly interdisciplinary, unifying work on consciousness and on Turing testing. Citations in Google Scholar: 9. Search on (harnad "turing test"): Google Books: 30; Scholar: 426; Google: 19,900. Search on (harnad (conscious OR consciousness)): Google Books: 118; Scholar: 1,930; Google: 61,100.
This paper presents the psychometric model, methodology and motivation for testing and validating RAE metrics against RAE panel rankings using multiple regression analysis. It was presented as an invited keynote paper to the International Society for Scientometrics and Informetrics, alongside another invited keynote by the father of scientometrics and the founder of the Institute for Scientific Information (Web of Science), Eugene Garfield. Our earlier 2005 demonstration that articles freely accessible on the web have 25%-250% greater citation impact, across the physical, biological, engineering and social sciences, is proving highly influential in both university and research funder policy-making on open access as well as research assessment policy, both in the UK and worldwide. Citation is just one of many new web-based metrics that will emerge as a result of our ongoing scientometric work. Citations in Google Scholar: 9. Search on (harnad ("research assessment" OR scientometric)): Google Books: 10; Scholar: 289; Google: 28,900.
Traditionally, formal refinement approaches are applied to closed computer systems. This paper significantly pushes the boundary of how refinement is applied by including abstractions of environmental behaviour in the formal reasoning, in this case intruders in the environment. This paper has inspired others to use refinement in similar broader ways, e.g., Abrial (Zurich), Fidge (Brisbane), Reed & Sinclair (UK), George (Macau), Pahl (Dublin).
UML-B and the associated U2B tool are designed to make the use of the B formalism more appealing to software engineers. This is borne out by the enthusiastic use of the approach and tools by industrial collaborators, e.g., Nokia, Praxis, ATEC, Volvo, KeesDA. UML-B and U2B were developed by Butler and Snook and this paper provides a comprehensive description of both.
Correct design of long-running transactions is a major challenge in distributed systems and many researchers have proposed formal approaches to this problem. This paper proposes a language and semantic model which defines a clear semantics for compensation. It is the first paper to define a correctness criterion for compensation and a modular verification approach for compensation. It is influencing the work on process calculi for transactions (e.g., Montanari, Laneve, Zavattaro) and programming language support for transactions (Vaucoleur in Zurich). This appeared in an invitation-only special edition of LNCS which was peer reviewed.
This paper won the Best Paper prize at ACM Hypertext'01, which is the premier international conference for this type of work. Samhaa El-Beltagy was my PhD student and we co-wrote the paper. It has 52 citations in Google Scholar.
Major review paper defining the grand challenge of "Memories for Life", which evolved from the Foresight Cognitive Systems Project. The paper reviews the state-of-the-art in research work on memory in the different disciplines involved, such as neuroscience, psychology, computer science and cultural heritage. The sections on the different topics were written by the subject experts, including myself, and threaded together by O'Hara. This was the most downloaded review from the Interface website in 2006 (572 downloads).
In this paper we investigate the feasibility and potential benefits of bringing together the P2P paradigm with the concept of hypermedia link services to implement ad hoc resource sharing on the Web. This illustrates how the P2P approach of the pioneering Microcosm hypermedia system can be applied to develop the P2P architectures required for the Semantic Web.
The Web Science Research Initiative, which is a collaborative venture between ECS at Southampton and CSAIL at MIT, was launched in November 2006. We wrote this review paper in the summer of 2006 to define the scope of the new interdisciplinary research discipline we call Web Science, and to provide a roadmap for other researchers to follow.
Building on my previous work in recommender systems and drawing upon emerging Semantic Web technologies, this study conducted in conjunction with my PhD student Middleton evaluated the use of ontologies in user profiling and demonstrated the improvement obtained through use of ontological inference. The work also resulted in two Knowledge Capture conference papers (2001, 2003), a Web conference paper (2002) and a chapter in the "Handbook on Ontologies". This and the 2001 and 2002 conference papers each have over 50 citations in Google Scholar.
Conducted under the Equator IRC, this paper by the Southampton team led by De Roure reports key early work in information systems aspects of pervasive computing, and with associated conference papers helped establish the new research area of "physical hypermedia"; just one other international centre (Aarhus) was active in this space at the time. An ACM conference paper based on this work has 31 citations. It has led to a series of subsequent studies in semantic annotation and the record-and-reuse paradigm in the context of pervasive computing. This information systems perspective on pervasive computing is now gaining international recognition through a workshop at the Pervasive 2007 conference.
Refinement approaches to formal design of systems focus on preserving correctness with respect to some abstract functional specification. However, non-functional properties, such as performance, are also important in system design. This paper describes the first ever attempt to integrate refinement reasoning with an analytical approach to stochastic performance. In particular it is the first ever formal approach to performance analysis that distinguishes probabilistic choice and nondeterministic choice (essential for refinement) in state based models. In April 2005, this was the article with the highest number of downloads over a 90 day period for the journal.
This work is the culmination of a four-year collaboration with C. Queinnec. It brings a concept of first-class resource to programming languages, which allows resource-conscious programs to be developed. This is crucial for PDA applications, where resources are limited. This work resulted in an invited presentation at the ECOOP'02 workshop on resource management. I was lead author on this paper; I wrote the formal specification and implemented it in Java. This paper consolidates results previously published in conferences and workshops, with over 45 citations in Google Scholar.
Using a formalism I conceived for other distributed algorithms, I specified Birrell's distributed reference listing (used in Java RMI); we designed a new graphical representation for it, identified problems with the original specification and proved the correctness of the new algorithm. This is significant work because this algorithm is available in millions of Java implementations across the world, and the solutions we have devised allow independent implementations to operate. In subsequent work, this algorithm and a previous one that I designed are used by R. Jones and myself as the basis of a deeper understanding of all distributed reference counting algorithms.
We propose a novel information-sharing protocol, by which distributed nodes in a volatile network can share information about their states, and learn about global properties of the network. This is a fully decentralised approach, which significantly improves over existing approaches. The paper provides both a formal and an empirical evaluation of the approach. I was supervisor of the student and PI of the "Mohican" project, which funded the student. This technique is being applied to disaster recovery in the Aladdin project (PI: N. Jennings).
This paper presents a completely novel approach to recommender systems: by seeing the user's browser as a shared resource where recommendations can be advertised, we introduce market-based techniques to solve this allocation problem. Market-based techniques are known to be efficient for resource allocation, and we have shown that they also solve the recommendation problem efficiently. Furthermore, this paper was followed by an empirical evaluation of the approach that confirms these results. This work resulted in a keynote at the workshop on Recommender Systems in 2005. I was supervisor of the student and PI of the "Magnitude" project, which funded the student. This paper and its conference-version predecessor have over 20 citations in Google Scholar.
This paper presents a novel methodology for the evaluation of industrial hypermedia. This is the first demonstration of this approach and shows a statistically significant benefit to the use of this technology in an industrial environment. Two Blue Chip industrial collaborators (Ford Motor Co. & Pirelli Cables) took part in the evaluation and the project was funded by the EPSRC. This work subsequently led to an EU funded project called Helpmate (value 85k). Wills was a co-author and carried out the research and analysis.
This paper challenges the traditional view of hypertext documentation and the publication lifecycle of experimental results, in a grid-enabled scientific environment. The programme of work was undertaken as part of an EU Framework 5 project and our main partners were the Royal College of Surgeons (England). The work described herein laid the foundations for the case study in an EPSRC funded ROPA award (value 147k) and also led to additional funding from JISC (value 150k) to instantiate the ideas into a Virtual Research Environment. The first three authors were the main contributors to this paper and Wills managed the research project.
This paper analyses the rationale and challenges for developing a computer system for reporting the results of multi-centred clinical trials. The work has received additional funding (150k) from JISC to instantiate the ideas into a Virtual Research Environment, which has been piloted within the Wessex NHS region and three local hospitals. Wills was the main author and he also managed the research project.
This paper demonstrates how industrial hypermedia can be used to represent and capture the knowledge intrinsic in the processes and documentation available on the shop floor (for machine operation and maintenance). The research involved a number of large industrial collaborators in both the UK (Ford, Pirelli, Eurotherm) and Denmark. The work was funded from EPSRC and EU grants. Wills was the main author and carried out most of the research and analysis.
The problem of "learning the kernel" within a learning method is an important unsolved problem in machine learning. This paper develops a novel approach to this task that combines the inherent strength of kernel methods with the power of transparent modelling approaches. It was the first approach to develop a kernel selection procedure by using a sparse basis approach. It has led to invited talks (e.g. ICANN2002, SYSID2003, BCSPAR2004, NN2005) and initiated consultancy work (e.g. Pfizer 2002 - contact Howard Ando (Howard.Ando@pfizer.com)). It currently has 38 citations in google scholar.
This was the first paper to develop the idea of kernel-based active shape models for the task of Chinese character recognition. Kernel approaches can extend the traditional linear active shape model to exploit the inherent non-linearity in the character recognition task, offering increased levels of performance. The algorithm achieved the best known performance of 96.5% radical extraction, and subsequently led to a system that achieved the best known performance (93% characters correct) on a publicly accessible database (HITPU, 430,800 Chinese characters). (First author: PhD student)
This is one of many papers arising from the CoAKTinG research project and is based on a conference paper which won a best paper award. The project, led by De Roure, brought together a set of tools emerging from research and development projects across multiple partners to enhance collaboration, taking a semantic annotation approach. The project was the only collaborative tools project in the e-Science programme, and the paper raises challenging issues relating to the real-time Semantic Web. An earlier paper from this project has 33 citations in Google Scholar. The project followed through to the JISC Virtual Research Environments project Memetic, and a further follow-up project, with further Semantic Web integration and an increasingly diverse user base, was announced in February 2007.
This paper is an update of the 2001 Technical Report commissioned by the UK e-Science programme which launched the Semantic Grid initiative (this paper has 81 citations in Google Scholar; the original report has 181). This report had very significant impact in the UK e-Science programme, leading to funding streams under this name, and in Grid R&D in Europe (reflected in the reports of the Next Generation Grids Experts Group) and globally through the Open Grid Forum. Through a series of papers, workshops, journal special issues and standards activities, this new research community has been established and best practice is emerging. This paper was presented as a 90 minute keynote address at the European Semantic Web Conference in 2005.
The work described in this paper studies the application of novel knowledge management tools created by the authors as part of a consultancy to a large international law firm (Clifford Chance). The firm became an industrial collaborator on the EPSRC-funded Advanced Knowledge Technologies IRC.
As technical director of the EPrints system on which this work is based, I conducted this work as lead computer scientist as part of my research into Institutional Repositories and Open Access with colleagues in Chemistry and the Digital Library centre in Bath. This paper describes a new method for disseminating experimental results in the chemistry domain; it is a novel contribution to the practice of E-Science extended post-experiment. Funded under the JISC/EPSRC E-Science programme, the work described has been adopted by the UK's National Crystallography Service and has been ratified as an international standard by the International Union of Crystallography. The best practical example in the emerging field of Open Access Institutional Data repositories, the work has led to other data repository projects (IRS, EBank III) and participation in US-led Mellon-funded international interoperability standards (OAI-ORE). The Journal of Chemical Information and Modeling is published by the prestigious American Chemical Society (ACS) and according to ISI data ranks #5 out of 83 Computer Science Interdisciplinary Applications journals.
This paper was the first to develop a probabilistic framework for Support Vector Regression. This enables the state-of-the-art performance of the Support Vector approach to be augmented with the robustness of a probabilistic output, enabling confidence values to be propagated through the predictive system. The theory has been used to inform new bounds (Chang and Lin, Neural Computation 2005) and the resulting algorithm has been used as part of subsequent Southampton projects in the area of "Adaptive Numerical Modelling" sponsored by British Aluminium Plate and Luxfer. It currently has 24 citations in Google Scholar. (First author: post-doc)
This paper was the first to develop a foundation for Frame kernels and their wavelet specialisations. This enabled a greater understanding of ideas from signal and image processing to be exploited in kernel-based learning algorithms. The central result was used as part of the "Adaptive Hyper- and Multi-Spectral Data Fusion for Target Detection and Tracking" project, which was funded by the UK MoD's Defence Technology Centres initiative (£277k), to produce robust classifiers. Subsequently, this work led to an invited paper for a journal special issue on the theory of machine learning for signal processing. (First author: post-doc, Second author: joint supervisor)
In this paper I describe RDT, my graphical modelling language in which users draw diagrams of their systems (with the aid of a tool) and which formed the basis of my PhD thesis. The language provides users with an appealing and intuitive graphical interface which retains the precision of the more usual process algebra. RDT models may be executed and subjected to exhaustive analysis using model checking. This paper was selected for publication in a special edition of the Journal of Systems and Software of the best papers presented at the 27th IEEE Annual International Computer Software and Applications Conference, Dallas, 2003.
This paper describes an extension I made to the RolEnact modelling language which permits automated simulation of models of software development processes. These simulations permit developers to experiment with the configuration of their process and tune it to meet specific requirements as demanded by the highest levels of CMM. The paper was published in a special edition of the Journal of Systems and Software containing the best papers presented at ProSim2000.
This was an invited paper thanks to my international reputation in the field. It describes the ground-breaking and influential research into high resolution colorimetric imaging. It was agenda-setting as the first direct multispectral imaging system for art and is used as a reference by key researchers in the imaging field. It led to several related European projects. The international team imaging the Archimedes Palimpsest copied the system. The Louvre, HP, National Gallery, MOMA, British Museum and many other researchers continue to use its results. Google Scholar shows 32 citations including Berns.
Building on our studies of the online usage of digital material in institutional and subject-based repositories, this paper was written in conjunction with PhD student Brody and his co-supervisor. This work has been influential in the development of Open Access practice in an international sphere, and has been widely reported before its official publication. It has led to a number of studies in bibliographic citation and online usage, and has resulted in further funded projects analysing the potential for repository usage statistics.
This highly cited paper (114 citations according to Google Scholar) was the result of the EPSRC-funded COHSE project. One of the first activities to investigate the impact of semantic modelling on human efforts to create navigation and linking in the Web (i.e. link authoring), it informed the EPSRC WiCK and AKT projects. I was the main author and Southampton Principal Investigator of COHSE.
The novel algorithms for content based image retrieval and the associated search engine detailed in this paper have been installed in several prestigious galleries around Europe (including the Victoria and Albert Museum in London and the Louvre centre for research in Paris). They were also a key initial component in a successful bid for EU funding in the SCULPTEUR project (total award 2 million Euros).
The paper presents a robust technique for extracting structure and motion of arbitrarily moving arbitrary shapes using a new approach combining motion templates with arbitrary shape description. The GAIT recognition work, of which this was a part, attracted substantial (US $1.2M) international funding from DARPA under grant number N68171-01-C-9002.
This paper presents work with a PhD student supervised by Lewis. It presents and evaluates a new approach to automatic texture segmentation and its application to image retrieval. The work has contributed to our image handling work in the cultural heritage sector. The approach is a significant part of the system we installed in the V&A museum in London and elsewhere around Europe. It contributed to the work of the ARTISTE European project, the success of which led to the securing of further EU funding in the SCULPTEUR project (total award 2 million Euros).
This paper presents work with a PhD student supervised by Lewis giving new results in the important area of automatic image annotation and making new observations concerning problems with the benchmarking of such processes. Image annotation of this type offers the possibility of significant improvements in image retrieval and is an important part of the work on multimedia processing in the IAM group, which has secured funding in excess of £1M in the last 4 years.
This paper describes new human interface techniques for augmented reality. It was the first AR paper to use open hypermedia to annotate 3D models. It is based on the work of my PhD student P. Sinclair and was also presented at the main mixed-reality conference ISMAR and at Ubicomp. Google citations = 17, including Hansen's review in 2006.
High-profile paper outlining the challenges ahead through examples in the GlacsWeb research. Contributes to sensor network hardware and algorithms and is especially recognised as a successful implementation of WSNs. Has received much press coverage, including BBC, CBC, US radio, ACM, IEEE Spectrum and Slashdot. It was also the first paper to give an overview of environmental sensor network research and its research issues. It led to the formation of a new conference on sensor networks in geoscience, which I co-chair at the AGU in the USA. It helped raise international recognition for Southampton in this area; a Google search on this title returns a first page dominated by my research. 43 Google Scholar citations, including Culler, Kumar and Mainwaring, and projects, e.g. EU RUNES.
This was the first paper to use agents for sensor network behaviour which combined communication and sensing power use. It helped to stimulate interdisciplinary approaches in sensor network research and raise awareness of agent approaches in sensor networks conferences.
Disability Discrimination Legislation requires reasonable adjustments to ensure disabled students are not disadvantaged, and this 12-page paper analyses how this can be achieved cost-effectively using new Automatic Speech Recognition technologies that make multimedia materials, including speech, accessible. The paper analyses how this can help receptive communication, as many people only know of these technologies assisting disabled people with writing. This paper analyses work undertaken by Wald in collaboration with IBM and an international collaboration of universities (including MIT, Stanford, Purdue & Australian National University) to develop and implement the unique technology and research its impact with disabled students and staff in real classrooms throughout the world.
This 12-page paper analyses work by Wald funded by an IBM faculty award and undertaken in collaboration with IBM and an international collaboration of universities (including MIT, Stanford, Purdue & Australian National University) to continue to develop unique Automatic Speech Recognition (ASR) based technologies and research their implementation and impact with students and staff in real classrooms. The paper analyses and provides for researchers and practitioners a non-technical overview of one of the key remaining issues with ASR, its problems in getting good accuracy in difficult environments, and the development of a unique system that enables human intermediaries to correct recognition errors as they are created by the ASR system.
This was the first paper that analysed in detail the unique ongoing programme of research being undertaken by Wald in collaboration with IBM and an International collaboration of universities (including MIT, Stanford, Purdue & Australian National University) that is developing and improving Automatic Speech Recognition (ASR) based technologies and researching their impact on students and staff in real classrooms in many countries. A novel method is employed to make ASR transcriptions readable without the dictation of punctuation. The paper had a great impact by encouraging the widespread use of these technologies to bring universal access to communication and learning for both disabled and non-disabled people.
An extended version of this 12-page invited paper will appear in the Journal of Educational Multimedia and Hypermedia 17(2) and is the most recent detailed analysis of the unique system conceived by Wald to overcome the difficulties in using Automatic Speech Recognition (ASR) in difficult environments. This work, part funded by an IBM faculty award, is part of an ongoing programme of research being undertaken in collaboration with IBM and an international consortium of universities (including MIT, Stanford, Purdue & Australian National University) to continue to develop and employ unique ASR based technologies and research their extensive impact on students and staff in real classrooms.
This was the first time that Augmented Reality (AR) had been used as an interface to a Hypermedia System. Millard's role was as co-supervisor to the student (in his capacity as an expert on contextual hypermedia), the student received his PhD for the implementation and evaluation described within the paper. The work helped to found a new theme within the Hypertext Research Community called Physical Hypertext, which subsequently informed Southampton's contribution to the Equator IRC.
This substantial paper (46 pages) brings the domains of Open Hypermedia (OH) and Adaptive Hypermedia (AH) together for the first time, with a unified approach that captures the advantages of each. These two fields have always been considered separate; this paper demonstrates that the advantages of Open Hypermedia can be used within the pragmatic world of Adaptive Hypermedia. This unified Millard's work on contextual hypermedia with the work of Christopher Bailey on adaptive hypermedia; Bailey received his PhD for the implementation and comparison. Earlier publications on this work (in the Adaptive Hypermedia conference) have become important citations within the AH field (29 citations, Google Scholar).
This paper addresses a long-running and controversial issue in the Web and Hypertext communities concerning how human users cope with semantic modelling. Millard was principal author and responsible for bringing the research team together. The work is being refined into a number of design patterns for the semantic web and is currently being shaped into an EPSRC proposal (with Millard as Principal Investigator) to explore knowledge elicitation through semantic wikis.
This significant journal article (51 pages) describes the work of the ArtEquAKT project, an important collaboration between the Equator and AKT IRCs. Millard was a senior member of the original research team, in addition to making major contributions to the text. Earlier ArtEquAKT publications have become important citations for research in the automatic document generation and applied semantic web communities (39 citations, Google Scholar); this paper describes the prototype system in detail, and also reports the evaluations and analyses of the system's performance for the first time.
Representative of a body of work on modelling Genetic Algorithms using Statistical Mechanics Approaches. There are about 20 papers on this topic by my collaborators and myself. This work is extremely well known and referenced in a number of text books on Genetic Algorithms, e.g. the very popular book "An Introduction to Genetic Algorithms" by Melanie Mitchell and more recently as chapter 7 in "Genetic Algorithms---Principles and Perspectives: A Guide to GA Theory" (2003) C. R. Reeves and J. E. Rowe.
This provides a rigorous complexity analysis of a problem with multiple minima where an evolutionary algorithm is demonstrated to outperform a hill-climber and simulated annealing. This adds to a body of work where I studied the computational complexity of heuristic search, e.g. Shapiro and Prugel-Bennett, FOGA 4 (1997) pp. 101--116, which was the first paper to show a problem in which a GA outperforms a hill-climber by orders of magnitude. This computational complexity approach has become a popular field of research. This paper was in the top 10 most downloaded papers from TCS when it first came out.
The IEEE Computational Intelligence Society awarded this paper the IEEE Transactions on Evolutionary Computation Outstanding 2004 Paper Award. This combines insights from statistical physics and new algorithms for studying the structure of hard optimisation problems with methods from population dynamics and the author's statistical mechanics approach to modelling Genetic Algorithms. This is the first work to extend theoretical studies of evolutionary algorithms beyond toy problems to instances of NP-hard problems. This paper led directly to Hallam, J. and Prügel-Bennett, A. (2005) Large Barrier Trees for Studying Search. IEEE TEC 9(4) pp. 385-397 and a paper accepted by Theoretical Computer Science.
This paper represents a body of work in Bioinformatics. The first author was my PhD student; the last author is Professor of Bioinformatics at Copenhagen University. Much of my interdisciplinary work has been published in specialist journals such as Bioinformatics 20 pp. 3613-3619 (2004), Nucleic Acids Research 33(19) e171 (2005) and Nature Physics 2 pp. 55-59 (2006).
This was the first paper to analyse the impact of the order in which the issues are negotiated in a multi-agent setting. We showed that this ordering can have a significant impact on the eventual outcome and this led to several subsequent papers (by us and others) dealing with some of the special sub-cases identified in this paper. This paper currently has more than 80 citations in Google Scholar. The research fellow (Fatima) was jointly and equally supervised by Wooldridge and Jennings.
This was the first paper in the field to deal with the issue of making tradeoffs in multi-agent negotiations. This work has formed the basis of several subsequent algorithms developed by other groups around the world and is invariably mentioned when people describe related work in this area (the paper currently has over 110 citations in Google Scholar). The PhD student (Faratin) was jointly and equally supervised by Sierra and Jennings.
This is the definitive version of Gaia, one of the earliest and most influential methodologies for agent-oriented software engineering. Gaia has been used by many groups around the world to structure their analysis and design of agent systems and several groups have extended it for their own methodologies (there are currently over 1000 citations to Gaia papers in Google Scholar). This paper was selected as an ISI "New Hot Paper" to reflect its high degree of citation (http://www.esi-topics.com/nhp/2005/may-05-FrancoZambonelli.html). All authors contributed equally.
This paper details a new algorithm for forming coalitions of software agents. In comparison to the previous state of the art (due to Shehory and Kraus, AIJ 98) our algorithm takes less than 0.02% of the time and 0.000006% of the memory. This algorithm is currently being exploited in the Aladdin project (£5.5M EPSRC grant) where it forms the core of our team formation work in the area of emergency response.
This paper presents a novel use of the FIPA agent communication language as a process ontology for use with the OWL Services framework, and demonstrates how this can improve service brokerage and matchmaking within a system for situational awareness and knowledge fusion in a simulated humanitarian relief scenario. The AKTiveSA (Knowledge-Intensive Fusion for Improved Situational Awareness) project, funded to the value of GBP275k as part of the UK MoD's Defence Technology Centres initiative, is informed by and builds upon the work reported in this paper. This paper currently has more than 60 citations in Google Scholar.
This paper describes the CS AKTive Space system, a Semantic Web application which allows the user to explore the state of the computer science discipline within UK HE, and which won the first Semantic Web Challenge in 2003. The system is noteworthy both as an interactive Semantic Web application built on a large real-world dataset, so requiring a scalable reasoning and storage infrastructure, and also as the first application of the mSpace user interface to Semantic Web information. The development of this system led to a project with EPSRC to develop a system for them to monitor the effectiveness of research council-funded interdisciplinary research; EPSRC subsequently used the system to evaluate aspects of the Life Sciences Interface Programme. The system was developed by Gibbins and Harris, and the paper was predominantly written by Gibbins.
This paper reports on consultancy work for EPSRC in which Semantic Web technologies developed within the Advanced Knowledge Technologies IRC were applied to the monitoring of interdisciplinary research funding within the Life Sciences Interface. This work builds on the experience of developing CS AKTive Space, and integrates a set of tools for aligning heterogeneous information (from EPSRC, MRC and BBSRC) with a flexible application framework for the visualisation and examination of research information. All authors contributed equally.
This paper describes a coreference management system for identifying and resolving coreferent individuals from heterogeneous ontologies, an important issue in the development of the Semantic Web. The system presented in this paper uses a novel combination of traditional attribute-based clustering techniques, with techniques for identifying communities of practice (loose behaviour-based networks) amongst the individuals in the ontologies. This paper currently has more than 30 citations in Google Scholar. All authors contributed equally.
This paper substantially extends the range of robots and tasks to which realistic and biologically plausible algorithms of neural plasticity could be applied. The paper originated in a peer-reviewed workshop to celebrate the life and achievements of Grey Walter. The Royal Society then selected a subset of these papers through another process of peer review for ultimate publication in its Philosophical Transactions of the Royal Society Series B (impact factor of 2.3). It formed the basis for Shadbolt's contribution to the successful £1,000,000 EPSRC SpiNNaker proposal with the University of Manchester to build a new type of processor based on models of neuronal firing.
This paper reported on one of the mid-term outputs of the EPSRC funded IRC AKT project. CS AKTive Space is a major system with a principled architecture able to integrate a wide range of information types using "Semantic Web" methods. It won the 2003 International Semantic Web Prize and has served as a blueprint for further research in a number of new projects (for example the EPSRC/DTI £800,000 Blended Market Insights project, and a number of Data and Information Fusion Defence Technology projects: AKTive SA, Semiotiks and Mimex, totalling £1,360,000 of funding). Elements of its open source architecture have served as the basis for technologies receiving substantial VC funding (£8,000,000 in its first two rounds) to set up Garlik Ltd (www.garlik.com), of which Shadbolt is Chief Technology Officer. Garlik offers individuals control over digital information appearing about them on the Web and in structured and semi-structured databases.
This paper was the first to demonstrate that the biologically inspired models of plasticity developed by Shadbolt and his colleague could run on robot platforms. These models of plasticity were able to replicate observed developmental phenomena in the brain and had appeared in a range of premier journals in neuroscience as well as computing (The Journal of Neuroscience impact factor 8.2, Cerebral Cortex impact factor 5.3, Neural Computation 1.7, Biological Cybernetics 1.4). Amongst its citations a key one is the first review of this area by Lungarella et al Developmental robotics: a survey, Connection Science, Vol. 15, No. 4, December 2003, 151–190.
This was the first paper to provide a computationally tractable negotiation algorithm for multi-issue bilateral negotiation that can deal with qualitative and quantitative issues. It exploits fuzzy constraint satisfaction techniques to relax preferences in order to produce Pareto optimal outcomes. This paper currently has over 60 citations in Google Scholar. The research fellow (Luo) was jointly supervised by Jennings and Shadbolt.
This paper provides a general mathematical analysis of synaptic normalisation in Hebbian models. It proves, contrary to widespread belief, that a particular form of synaptic normalisation may be used in the presence of positive correlations. It thus overturns the belief that this particular form is inadequate for biological purposes. This paper, together with my Neural Computation 17:2316 (2005) paper (also returned), resulted in my being invited to the Society for Neuroscience meeting in Atlanta, 2006, to speak at a minisymposium on novel approaches to cortical function. Following publication of this paper, Shun-Ichi Amari, of RIKEN in Japan, one of the leaders in the field, e-mailed to congratulate me on the paper.
This paper couples, for the first time, a realistic model of neuronal development, to a neuromorphic, silicon retina. Such coupling demonstrates the robustness of the model to real-world, noisy stimuli. The neuromorphic engineering community are beginning to take realistic models of neuronal plasticity seriously. For example, Prof Kwabena Boahen, at Penn State, is building neuromorphic plasticity chips inspired by my neurotrophic model of synaptic plasticity employed in this paper.
This paper addresses the myth that certain forms of synaptic plasticity in neural networks are inconsistent with experimental data because of their alleged failure to exhibit certain, crucial properties. In particular, this paper overturns this myth by providing an explicit counter-example, demonstrating such a model that does operate correctly. This paper provided the basis for the much more general analysis in my Neural Computation 15:937 (2003) paper (also returned). I was invited to speak on the results in this paper, and my other papers on neurotrophic models, at the recent CNS conference in Edinburgh, July 2006, but unfortunately was unable to attend due to prior commitments. Moreover, the paper was recently cited by Prof Michael Weliky, Rochester, an up-and-coming leading light, following his experimental study of the correlational structure of activity patterns in the LGN. That study rules out a large class of popular, highly influential linear Hebbian models, but my models survive unscathed, precisely by virtue of their nonlinear dynamics.
This paper presents a significant re-interpretation of spike timing dependent plasticity, showing that complex learning rules may emerge from temporal and synaptic ensembles, the components of which obey extremely simple rules. This demonstrates that traditional assumptions about what synapses may or may not be required to compute are, if not wrong, at least in need of revision. This paper, together with my Neural Computation 15:937 (2003) paper (also returned), resulted in my being invited to the Society for Neuroscience meeting in Atlanta, 2006, to speak at a minisymposium on novel approaches to cortical function. As a result of that meeting, I have been in discussion with Prof Kevin Fox, Cardiff Biosciences, a leading experimentalist on synaptic plasticity, about this model, and about possible experiments to test it. If the model is correct and the suggested experiments turn out in its favour, a large swathe of competing models will be ruled out.
Presents several generic methods (not dependent on log files and other internal parameters of speech engines) for going beyond audio-signal based estimates of confidence for speech recognition output. The most promising technique, developed using a probabilistic fusion of audio and language models for decision-making, was benchmarked at 3 percent better than the state-of-the-art at the time of publication. The other semantics-based measures introduced in this paper were the first of their kind, and subsequent citations of this paper show that they have found significant use not only for their intended application but also in unanticipated applications such as call routing and spoken dialog systems.
The paper contributes a methodology for incorporating corpus-based statistics in order to ground the ill-posed problem of ontology evaluation. It proposes a departure from top-down approaches to knowledge acquisition and evaluation, and outlines probabilistic hidden variable models that could be adapted. Being the first proposed methodology of its kind, this work has been cited comparatively frequently (Google Scholar shows 34 citations in the first three years since publication), and it has been included in a major authoritative on-line collection of readings on ontology evaluation and validation techniques, organized and hosted by the National Centre for Biomedical Ontology in the USA. This work resulted in a workshop organised in Banff in October 2005 on Ontology Management.
One of the earliest papers on the use of ontologies to track online communities, outlining proposals for its usage as well as a formal structure for extending and improving the algorithms used. The tool that was developed has been extensively used in other projects within the Advanced Knowledge Technologies Interdisciplinary Research Collaboration (AKT IRC) and in applications ranging from recommender systems to resolution of co-referent identifiers in the Semantic Web. Google Scholar shows 33 citations to this paper.
This paper provides a critique of the assumptions behind design rationales for medical ontologies, based on an ontology-based system for supporting decision-making in breast cancer. While this paper is too recent to track its impact (journal impact factor of 1.38), it builds on work partially reported in earlier conference publications which had already drawn attention from all the significant projects involved in digital mammography in the UK and the EU. The experience in biomedical ontologies has resulted in a funded EU project on brain tumours, called HealthAgents, worth a total of 3.79M euros.
This paper introduces the first of a series of techniques for scaling the application of retrenchment to larger applications: decomposition. It forms part of a larger body of methodological work on composition and scalability of formal development with refinement and retrenchment, some of which has been submitted for journal publication. This paper - and the REJ paper among others - formed the background to a successful proposal to the British Council/ EGIDE Alliance Partnership Programme in 2003 to collaborate on retrenchment with CNRS-IRIT in Toulouse.
This joint work was an invited contribution to a special issue of REJ, based on our original paper at the Intl. Workshop on Model-Based Requirements Engineering (San Diego, USA, Dec. 2001), which won the workshop best paper award. These papers introduced retrenchment, a new refinement-based formal technique, to the Requirements Engineering community.
This journal article is a milestone in a project applying our retrenchment approach to an industrial-scale problem well-known in the literature: the Mondex electronic purse. The article reviews application problems of classical refinement in the Mondex context, and summarizes our four prior publications on retrenchment and Mondex. The paper demonstrates how retrenchment can address another problem: the balance enquiry quandary. Building on earlier work, this paper also presents a generalized refinement solution to the quandary, something not hitherto thought possible. This retrenchment work is influential since it contributes to the ongoing investigation of Mondex in VSR-Net - an EPSRC Network for the Verified Software Repository. The network is concerned with both theoretical and automated refinement proof techniques applied to Mondex; the article contributes both generalized refinement and retrenchment approaches to proof.
This paper is the archival work covering the nine years of work, and over 20 publications to date, on the theory and practice of retrenchment. A critique of classical refinement as a software specification constructor is provided. The theory of retrenchment is defined, motivated and theoretically elaborated, using this critique as a baseline. In particular we cover the utility of retrenchment for requirements engineering and model evolution, as well as its simulation theory and notions of correctness. Application examples from control theory and internet finance (NatWest's Mondex electronic purse) are developed from previous work as templates for users wishing to apply retrenchment.
By focusing on a novel menu access technique, this paper presents a substantive critique and evaluation of traditional methods used to evaluate adaptive interfaces, and provides an alternative design method: adaptable rather than adaptive user interfaces.
Provides a novel and innovative method for design teams who are not experts in the given domain to carry out design elicitation of highly expert, longitudinal and idiosyncratic tasks. This work became an iconic example of how formal usability research can contribute to the evolving eScience programme: it was cited in Science magazine's 2005 overview of leading eScience research by Hey and Trefethen, and was also selected by Nature in 2005 as an exemplar of leading work addressing problems in digitizing the laboratory. The work is also the basis of the follow-on EPSRC best practice grant, myTea, translating this process from chemistry to bioinformatics. The work has also resulted in over a dozen invited talks on this method in the past year alone, from eScience grid researchers, to usability experts and researchers, to the electronic lab book industry.
A novel way to reduce from 13 steps to 2 the automatic capture of user-determined chunks from Web pages into a single new and sharable web page. Won best presentation, WWW2002 Conference, Honolulu, May 2002. It has been adopted by the EPSRC eScience project myTea to support bioinformaticians. It has recently been released as a new plugin for Firefox and has been downloaded by over 600 bioinformaticians since February 2006 to support their work.
This article is part of the first high-profile publication setting out the parameters of a new field at the intersection of human computer interaction and information retrieval research called "exploratory search". The article describes mSpace, a foundational technology in exploratory search for the exploration of heterogeneous sources. The outcome of this special issue has been two workshops at the leading IR (SIGIR06) and HCI (CHI07) conferences. mSpace is an exemplar model and technology in this space, both in research and in uptake. It is well cited by hypertext and semantic web interaction papers. The technology itself is also well known: the site was "slashdotted" earlier in the year, and the servers running mSpace withstood thousands of hits per day during the week; likewise the tech report describing the software is still the most downloaded paper from the ECS archive.
This presents one of the first ontologies (DAML-S / OWL-S) for defining services within the Semantic Web (426 Google citations; an earlier version also received 247 citations after being presented at the first Semantic Web Working Symposium), and was instrumental in establishing the Semantic Web Services research field. This collaborative work (authors are alphabetical) introduces the notion of abstracting semantic descriptions for discovery (primarily my contribution), and modeling workflows; similar descriptions have since appeared in other frameworks in the field (including the EU funded WSMO and the International SWSA consortium). This paper has subsequently resulted in invitations onto an EU consultation panel and various research panels as an expert on Semantic Web Services.
This was one of the earliest papers to address Semantic Web Service discovery through the use of reasoning over assertions made within a semantic web environment, and was the first to demonstrate and evaluate the applicability of DAML-S Profile definitions, through a proposed approach and an evaluated prototype. Although the order of authors honors the effort involved in developing the prototype, the paper reflects much of my early work on semantic service descriptions, and early theories on matching mechanisms. The paper has since been cited as one of the seminal works in this area (almost 650 citations in Google Scholar).
This paper presents a novel framework for delivering targeted content via public displays to dynamic audiences, exploiting notions of Agent-based marketplaces for Pervasive Computing. It has been operational in several locations since 09/2005. Display logs were used to validate the feasibility of personal device detection, and the framework found to be at least 34% better than two commonly used mechanisms. Subsequent papers have followed, and work is underway with a European SME to deploy this technology, with the EU Commission expressing an interest to fund future work. It has attracted significant media attention internationally (see http://www.ecs.soton.ac.uk/research/projects/BluScreen) and resulted in an invitation to spend several months at the Nokia-MIT Research Institute.
This is one of a select number of papers accepted for a special issue on eScience. The paper is one of the first to detail the many levels of workflow resolution necessary to realise Semantic Grid Workflows, and expanded on the notion (posited in earlier papers on workflow composition using techniques such as DAML-S/OWL-S) that simply composing semantic web services based on their semantics is not enough to achieve pragmatic, workflow executions. The work reflects collaboration between members of Manchester (first four authors) and Southampton (last four authors).
This paper reveals a surprising and previously unobserved phenomenon whereby a genetic algorithm exhibits a critical mutation rate at which the crossover operator and the symmetry of the optimisation problem combine to cause a phase transition. The result gives a novel insight into how the genetic algorithm is searching the problem, it significantly extends the understanding of these algorithms, and for the first time points to the existence of optimal operating parameters for these algorithms.
This paper describes and analyses the use of error-correcting codes to coordinate the behaviour of a team of players within a noisy iterated prisoner's dilemma (IPD) competition. This paper represents the first scientific analysis of team play within an IPD tournament, and the method described was used by the authors to win both the 2004 and 2005 20th Anniversary Iterated Prisoner's Dilemma competitions, hosted at the 2004 IEEE Congress on Evolutionary Computation (CEC'04) and the 2005 IEEE Symposium on Computational Intelligence and Games (CIG'05).
This paper represents the most comprehensive analysis to date of how discrete bid levels (a common restriction in both real-world and online auctions) affect the revenue generated by an English auction. This issue has largely been ignored by the existing economic literature on auctions, but is increasingly important for the design of efficient online auctions.
This paper presents the first mathematical model of the eBay auction format (earlier work having focused on qualitative and quantitative analysis of auction data) and describes in detail for the first time the effect that the eBay minimum bid increment and proxy bidding system have on the properties of these auctions. It provides insights into how these auctions should be designed and also into the optimal behaviour that bidders should adopt.
This paper presents a new, modular approach to defining modal specification logics for state-based systems modelled as coalgebras. This work has constituted the starting point for a novel approach to model-based verification, applicable to wider classes of systems than existing model-checking techniques. A subsequent EPSRC First Grant application has secured funding to the value of £115,000, in order to develop this approach and demonstrate the relevance and applicability of coalgebraic techniques to system specification and verification.
This paper continues the programme of research initiated in [Cirstea, C. (2004), A compositional approach to defining logics for coalgebras, Theoretical Computer Science 327(1)], by developing a systematic approach to defining and logically characterising notions of simulation for systems modelled as coalgebras. These results are relevant to the development of model-based verification techniques for systems modelled as coalgebras, as they allow many notions of system refinement to be treated uniformly. This work is currently being exploited as part of the EPSRC project "A Modular Approach to Model-Based Verification: Logical, Semantical and Algorithmic Support", having the author as Principal Investigator.
The research described in this paper, first reported in [Cirstea, C. (1999) A coequational approach to specifying behaviours. Electronic Notes in Theoretical Computer Science 19] was carried out as part of the author's DPhil thesis, and belongs to the early studies into what constitutes an appropriate specification logic for coalgebras. Since coalgebras are the categorical dual of algebras, much of the early work on coalgebraic logics, including the present work and similar work in [Corradini, A. (1998) A completeness result for equational deduction in coalgebraic specification. Lecture Notes in Computer Science 1376] attempted to dualise results from universal algebra, therefore focusing on equational-style logics for coalgebraic models. The novelty in the present paper was the increased generality of the coalgebraic structures considered, with observer operations with complex result types also being accounted for. Later work on modal logics for coalgebras, e.g. by [Kurz, A. (2000) Logics for Coalgebras and Applications to Computer Science. Doctoral thesis, Ludwig-Maximilians University Munich], drew inspiration from the previously-mentioned works.
Final models are typically used to give semantics to algebraic specifications of systems with hidden state, whose behaviour is only observable through a specified set of observer operations. This paper shows how final semantics can be generalised to specifications comprising both observer and constructor operations. This work was carried out as part of the author's DPhil thesis, and is relevant to the semantics of object-oriented specification languages, where object interfaces consist of constructors on the one hand, and observers (methods and attributes) on the other.
This paper describes the first ever experimental demonstration of enzymatic computing. For half a decade it received little attention, but it came to the fore, attracting citations from high impact journals including PNAS, in 2006 when Baron et al. extended my work from a single enzyme to a network of enzymes (Angewandte Chemie Int. Ed. 45(10): 1572-1576, 2006; cf. http://www.newscientisttech.com/article/dn8767.html). I have received funding for one PhD position through the University's Life Sciences Interfaces Initiative to pump-prime the implementation of enzymatic computing in microfluidic devices in collaboration with Hywel Morgan.
With Hywel Morgan's group, we are now working towards integrating the pioneering work in the use of living cells (Physarum polycephalum plasmodia) as molecular robot controllers reported in this paper. Within the first two months after it became available on eprints, 2598 copies of the pdf file of this article were downloaded. The work, carried out by PhD student Soichiro Tsuda (now a visiting researcher in my team) and myself in Prof. Gunji's laboratory, received worldwide press coverage (incl. The Guardian, Spiegel, Frankfurter Allgemeine Zeitung (FAZ), and GEO). Jointly with Prof. Hywel Morgan's group we have recently integrated the cells into bio-electronic hybrid chips (see http://www.newscientist.com/article/dn11875-biosensor-puts-slime-mould-at-its-heart.html).
This is one of the earliest implementations (and may well be the earliest) of autonomous experimentation applied to biochemistry. It preceded the "Robot Scientist" (King et al., Nature 427:247, 2004) and, in contrast, operates without human intervention. Naoki Matsumaru worked as an MSc student under my supervision, and Silvano Colombano from NASA collaborated with us. Our technique is particularly suited to low-bandwidth communication with the experimental apparatus; we are currently developing bespoke hardware and software to implement autonomous experimentation on wireless lab-on-chip sensors to conserve on-board chemical resources.
This paper describes a new conceptual approach to using molecules efficiently in computing architectures. It is the basis of my current research efforts in molecular computing. The presentation of this paper led to my being invited, as the most junior academic member, to join the 2020 Science Group, a group of internationally distinguished scientists tasked with developing a roadmap and comprehensive vision for the convergence of computer science and the other sciences towards 2020 (cf. Nature 440, pp. 383-580). Microsoft Research provided me with a 250k Euro grant to develop and implement the approach outlined in this paper.
This was the first paper to investigate the use of string kernels for text categorisation. It included efficient methods of string kernel computation and a novel approximation method for string kernels. It is a very widely cited paper in the area of discrete kernels, and these kernels have been developed further by many researchers and applied to a wide range of application domains. As a direct result of this research, a PhD studentship (£69,510) was awarded to Craig Saunders by GlaxoSmithKline to study kernels for molecular structures.
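To make the underlying technique concrete, here is a minimal sketch of the gap-weighted subsequence string kernel computed by naive recursion. It illustrates the general idea only; it is not the paper's efficient computation or approximation method, and the function names, the decay parameter lam and the example strings are assumptions introduced purely for illustration.

    # Naive recursive subsequence string kernel (illustrative sketch only).
    # lam is the gap-decay parameter penalising non-contiguous matches.

    def k_prime(n, s, t, lam):
        # Auxiliary recursion: weights length-n common subsequences that may
        # still be extended to the right (the standard textbook recursion).
        if n == 0:
            return 1.0
        if len(s) < n or len(t) < n:
            return 0.0
        x, s1 = s[-1], s[:-1]
        total = lam * k_prime(n, s1, t, lam)
        for j, cj in enumerate(t):
            if cj == x:
                total += k_prime(n - 1, s1, t[:j], lam) * lam ** (len(t) - j + 1)
        return total

    def subseq_kernel(n, s, t, lam=0.5):
        # K_n(s, t): similarity of s and t via their common subsequences of
        # length n, each weighted by lam raised to the span it occupies.
        if len(s) < n or len(t) < n:
            return 0.0
        x, s1 = s[-1], s[:-1]
        total = subseq_kernel(n, s1, t, lam)
        for j, cj in enumerate(t):
            if cj == x:
                total += k_prime(n - 1, s1, t[:j], lam) * lam ** 2
        return total

    # Example: with lam = 0.5, the only shared length-2 subsequence of "cat"
    # and "car" is "ca", giving lam**4 = 0.0625.
    print(subseq_kernel(2, "cat", "car"))

In practice such a kernel is plugged into a standard kernel machine (e.g. an SVM) in place of the usual inner product; the paper's contribution lies in making this computation efficient and approximable for realistic text collections.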
This paper presented a novel efficient method for learning hierarchical output structures, which made learning on larger datasets more practical than previous approaches. It was one of the first papers in the optimisation-for-structured-output setting, and has subsequently generated a great deal of interest. This research contributed a significant basis for the recently awarded EU STREP grant "Statistical Multilingual Analysis for Retrieval and Text" (total award €2.4 million), for which Craig Saunders is one of the partners. Many applications of the technology are currently being explored, including function classification of enzymes in conjunction with the University of Helsinki.
This paper won the best paper award at the ECML conference. It presented a completely novel application of structured kernels to the problem of identifying famous pianists from their playing style. A new analysis of Kernel Partial Least Squares was presented and successfully used as a feature selection algorithm, and the resulting approach outperformed previous methods. Subsequently, one of the co-authors was awarded an EPSRC grant which was partially based on this paper.
This paper gave two novel extensions of the string kernel approach: moving beyond a letter-based representation, and allowing 'soft' matching between symbolic elements. Both of these ideas have been used by other researchers in applications ranging from chemoinformatics to machine translation. A subsequent grant application based on this paper, a string kernel paper in the Journal of Machine Learning Research and a poster presentation at Neural Information Processing in 2001 led to the award of an EPSRC grant to Craig Saunders entitled "Development and Application of String-Type Kernels", which totalled £107,058 and was successfully completed in January 2006. A second, related EPSRC grant application, which extends the research in that grant and combines it with genetic algorithms, is currently under preparation.
This paper describes the first ever work to implement a fully autonomous evolutionary process distributed across a population of physical robots. The approach addresses the serious bottleneck challenges in prior evolutionary robotics (ER) methodology identified in the ER literature. The article has been cited 25 times (including citations in 5 different journals, from Adaptive Behavior to Connection Science), and concludes a series of related conference papers with a total of 107 citations. The work has been included in 4 different teaching syllabuses on robotics and adaptive behaviour.
This paper discusses various concepts of modularity and how they impact complex dynamical systems. Structural and topological modularity is contrasted with functional, behavioural, and dynamical modularity. The work featured as an invited talk at the Santa Fe Institute for complex systems, and culminates a line of development across 4 conference papers that have been cited a total of 83 times.
This paper contrasts the differing assumptions about epistasis (fitness interactions between genes/variables) that are standard in evolutionary biology and in evolutionary computation, with the aim of unifying their critical properties. The journal is one of the most prestigious in its field (impact factor: 3.7), and it is a major breakthrough to be able to carry knowledge from evolutionary computation across to theory in evolutionary biology/population genetics. In 18 months the paper has been cited 15 times: twice in Nature/Nature Genetics, twice in Science, and in 10 other journal articles.
This monograph describes the entire evolutionary theory I have been developing for the last 10 years, incorporating the results of more than 20 of my published works (combined citations to those works total 294). It is an interdisciplinary work addressing the celebrated question of 'why sex?' in evolutionary biology (EB) and the engineering utility of analogous mechanisms in evolutionary computation (EC). This work is the first monograph in the Vienna Series in Theoretical Biology. It sold 400 copies in its first two months after release. The Biosystems journal article, contributing a chapter, has been cited 25 times.
This paper is the first substantial re-evaluation of the Semantic Web since the appearance of the highly influential 2001 Scientific American article (3426 citations in Google Scholar). Since being deposited, the paper has been the most downloaded of all Southampton eprints articles. It has been influential in promoting the concept of semantic annotation, so that the Web can evolve from a Web of documents to a Web of data, and it represents an important statement of the key achievements and future directions for this area of research.
This paper distils the results of an influential workshop held in London in 2005. It lays out an agenda for research in an emerging, multidisciplinary area the authors have termed Web Science. It is anticipated to have significant impact, having appeared in the journal Science (impact factor 31.85). The launch of the Web Science Research Initiative (based on the ideas in this article) attracted significant worldwide attention in the scientific, technical and general media; it was endorsed by Google's CEO Eric Schmidt and presented in testimony by Tim Berners-Lee to the US Congress.
This is one of the first papers to study the key challenge of bilateral bargaining when both buyers and sellers have additional bargaining opportunities if they fail to reach an agreement with their current opponent. The model extends the well-known Ultimatum game and leads to surprising and counter-intuitive results (in particular that, in contrast to the Ultimatum game, in environments with incomplete information the price-taker obtains the largest share of the surplus). In addition to theoretical results, the paper extends the state of the art by using evolutionary algorithms to study bargaining settings with incomplete information which cannot be analysed game-theoretically. The same approach has since been used at CWI in Amsterdam and at the University of Southampton in a number of subsequent papers and projects to study systems which are mathematically intractable.
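As a rough illustration of how evolutionary algorithms can be brought to bear on such bargaining settings, the sketch below co-evolves proposer offers and responder acceptance thresholds in a plain Ultimatum game. It is a deliberately simplified stand-in: the population size, matching scheme, mutation step and payoffs are assumptions for illustration, and the paper's actual model (with outside bargaining opportunities and incomplete information) is considerably richer.

    # Toy co-evolution of Ultimatum-game strategies (illustrative sketch only).
    import random

    POP, GENS, MUT = 50, 200, 0.05   # assumed population size, generations, mutation step

    def play(offer, threshold):
        # One round over a unit pie: the responder accepts iff offer >= threshold.
        if offer >= threshold:
            return 1.0 - offer, offer      # (proposer payoff, responder payoff)
        return 0.0, 0.0                    # disagreement: both get nothing

    def evolve():
        proposers = [random.random() for _ in range(POP)]   # each evolves an offer in [0,1]
        responders = [random.random() for _ in range(POP)]  # each evolves an acceptance threshold
        for _ in range(GENS):
            pfit = [0.0] * POP
            rfit = [0.0] * POP
            for i in range(POP):            # fitness from random pairwise matches
                j = random.randrange(POP)
                p, r = play(proposers[i], responders[j])
                pfit[i] += p
                rfit[j] += r
            def next_gen(pop, fit):         # tournament selection + Gaussian mutation
                new = []
                for _ in range(POP):
                    a, b = random.randrange(POP), random.randrange(POP)
                    parent = pop[a] if fit[a] >= fit[b] else pop[b]
                    new.append(min(1.0, max(0.0, parent + random.gauss(0.0, MUT))))
                return new
            proposers = next_gen(proposers, pfit)
            responders = next_gen(responders, rfit)
        return sum(proposers) / POP, sum(responders) / POP

    avg_offer, avg_threshold = evolve()
    print(f"mean evolved offer={avg_offer:.2f}, mean acceptance threshold={avg_threshold:.2f}")

Under these toy settings the evolved offers and thresholds tend to drift towards small values, in line with the subgame-perfect prediction; the behaviour studied in the paper arises precisely when outside options and incomplete information change this picture.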
This paper presents an innovative and practical way of performing multi-issue negotiations when the preferences of the opponent are not known. It is, therefore, an important contribution to applying bargaining in real-world settings. The techniques are then applied to the selling of information goods using a two-part tariff. This type of tariff has the advantage of producing win-win outcomes and also allows a seller to indirectly price discriminate among buyers with different preferences. The developed approach resulted in a system being developed in collaboration with ING bank to enable the automated negotiation of financial information between partners and branches of the bank.
This is one of the first papers to study auction mechanisms for sponsored advertisements on websites. At the time the first results were published (a technical report with initial results appeared in 2001), Google had only just started using an auction-based approach for displaying banner ads. The innovative ideas presented in the paper also resulted in a patent pending (with intended coverage in the US, Europe and Japan) in collaboration with the Dutch national telephone company KNP. The techniques were developed in order to provide automated recommendations of products on the company's website.
This paper was presented at the most prestigious AI conference: Out of 1353 submitted papers, this was one of only 212 that were selected for oral presentation. Specifically, the paper is the first to study shill bidding and auction fees in a setting where auctioneers or sellers compete for buyers by setting appropriate auction parameters. The research has been the foundation for a new international trading agent competition, called TAC Market Design (also known as CAT, http://www.marketbasedcontrol.com/cat ), where markets rather than trading agents compete against each other. This competition will run for the first time at AAAI-07. The paper has also attracted media interest and has been the subject of radio interviews and press articles.
This paper reports, in a top-10 AI journal, the first solution to the problem of coevolutionary disengagement (identified by Watson and Pollack in 2001) and demonstrates its success on a classic real-world list-sorting problem. The results have since been generalised to maze-solving by Noble. Along with the conference papers that led to it, the paper has garnered 30 citations in total. The research was carried out as part of Cartlidge's PhD, supervised by Bullock.
This paper in the premier theoretical biology journal reports the first model to capture the physical constraints on termite mound construction. Previously, the leading model idealised termites and pheromones as able to pass freely through walls. Its author, Eric Bonabeau, described our new work (which derived from Ladley's prize-winning undergraduate project, supervised by Bullock) as “a welcome and, I believe, substantial addition to the field” in his review of the paper. It has led to a paper at an international conference on swarm algorithms published by Springer (ANTS2004) and an invited talk at the “Environment Construction” workshop at ALIFE X.
This paper in Adaptive Behavior (an interdisciplinary journal, rated second by impact factor within its subject category by ISI) is the first to systematically and independently explore the behaviour of a novel visualization tool within artificial life. It led to a successful workshop on visualising the behaviour of evolutionary algorithms at the Eighth International Conference on Artificial Life (co-organised by Bullock), which in turn led to a special issue of the journal Artificial Life on visualization for complex adaptive systems (co-edited by Bullock). The research was carried out as part of Cartlidge's PhD, supervised by Bullock.
The paper was published as an invited "visualization gem" within a special issue of Artificial Life dedicated to visualization for complex adaptive systems. It serves as the definitive record of a body of work that represents the most well-developed and well-understood visualization technique to emerge within the field of artificial life (over 150 citations to papers on this technique since 1998).
This paper proposes a number of optimisations to the best-known data structures for Boolean satisfiability (SAT) algorithms. Extensions to these data structures are currently in use by some of the best SAT solvers in existence, including JeruSAT and Eureka.
This paper contains the original ideas for using Boolean Satisfiability (SAT) techniques in the Binate Covering Problem and, indirectly, in Pseudo-Boolean Optimisation (PBO). These ideas are currently used by the most effective PBO solvers, including BSOLO and Pueblo, and have been extended in a number of ways by the authors of the more recent Pueblo PBO solver.
This paper unifies a number of different approaches for precise circuit delay computation using Boolean Satisfiability (SAT). The paper also proposes a number of novel algorithms for precise delay computation in the presence of accurate delays. These new algorithms and their effective integration with SAT solvers have contributed to shaping the area of precise delay computation in the presence of false paths.
This paper proposes an exact model for computing test patterns for faults in digital circuits. The proposed model is polynomial in the size of the digital circuit. It is one of the first test pattern generation models to provide guarantees of optimality, in this case on the size of the test pattern. Moreover, this work motivated additional work in the area of models for optimisation problems in testing digital systems.
This work proposes a new set of communication and mobility primitives in ambient-based systems, together with a sophisticated type system that addresses important safety concerns on both process mobility and communication. More precisely, using a combination of programming primitives (viz. a form of co-capabilities and credentials for access negotiation) and typing techniques (viz. the unique-thread-of-control property), it delivers a formal framework in which all legal systems are free from communication and mobility interferences. This outlet is a top refereed journal, and the paper appeared in short form in a top refereed conference (FST&TCS 2002). The work inspired follow-up papers and dissertations on information hiding in mobile ambients, role-based access control, and implementation of ambient calculi. According to Scholar, it has 68 citations (counting journal and conference papers together). Regarding the attribution of credit for personal contributions, I conceived the main ideas behind the design of the calculus, and have been a key contributor all along.
SOS rule formats that guarantee that bisimilarity is a congruence have had enormous impact on the field, but sit at odds with structural axioms (e.g. associativity), which severely limits their applicability. This paper proves that `tile logic' systems can express SOS formats that accommodate structural axioms and yield dynamic bisimulation congruence (the largest bisimulation that is also a congruence), a precursor of contextual bisimulation that I introduced during my PhD. This outlet is a top refereed journal, and the paper appeared in short form in a top refereed conference (IFIP TCS 2000). The development here has been used mainly by the tile and rewrite logic community, in Italy and in the USA. Counting together this paper and its direct predecessor on dynamic bisimulation, I count about 54 citations on Scholar.
This work culminates a long-running line of research (started with my PhD work) that has produced a series of papers in top refereed conferences. The first categorical model of Petri net computations was presented in 1988 by Meseguer and Montanari. During my PhD I observed that it lacked the all-important property of functoriality; in 1994 I proposed a first solution based on a notion of Petri net `strong' process. This paper builds on that work; it focuses on an alternative approach based on the idea of pre-nets, and formulates their operational, causal and categorical semantics in a way that settles the question completely and satisfactorily. Summing together the five main papers I have (co)authored in the body of work culminating here, I count over 100 citations on Scholar. The work has provided inspiration for several PhD projects, mainly in Italy (Pisa) and Germany (Berlin). Regarding the attribution of credit for personal contributions, I have been a key driver of this research from its beginnings in 1990, a line of work which over the years has involved many researchers and a few PhD dissertations.
This work develops an analysis of space usage in the context of a calculus where locations have bounded capacities that processes consume, and where process activation and migration require space. This is a realistic scenario for several important applications involving code migration in constrained computational environments. The main result of the paper is the design of capacity types and related type systems to provide static guarantees that the intended capacity bounds will be preserved throughout the computation; i.e., both that the host environment will not starve the migrating process of resources and that, vice versa, the latter will not use more than the allocated resources. This outlet is a top refereed journal; the paper is a much extended and extensively improved version of a refereed conference publication (ASIAN 2003), building also on ideas presented previously at a major conference (CONCUR 2002). Regarding the attribution of credit for personal contributions, I conceived the initial ideas for the calculus and types, and have been a key contributor all along. It is too early to assess the impact of this paper; its conference predecessors were relatively well received, with Scholar giving 46 citations.
This paper describes the experimental set-up and results for an evaluation of eight leading automated first-order logic theorem provers on a corpus of realistic software verification problems. This is the most comprehensive of the few application-oriented prover comparisons. A subset of the proof problems has been included in the TPTP, the standard corpus used for development, testing, and evaluation of first-order provers, making up more than 2% of the corpus. This has influenced the developers of many leading theorem provers.
This paper describes the design of the AutoBayes program synthesis system, as well as some application examples. It elaborates the concept of schema-based synthesis, which combines code template expansion with symbolic reasoning. This approach is the basis of ongoing research in program generation at the NASA Ames Research Center (funded by NASA) and at University of Southampton. In a separate, NASA-funded application project AutoBayes is currently also used for scientific data analysis.
This paper marks a significant advance in the understanding of operational semantics for higher-order concurrent languages. It uses the higher-order pi-calculus as a vehicle for demonstrating the feasibility of characterising contextual equivalence in such a language without any restriction to finite types. This is the first result of this nature and improves upon the long-standing (10 year) state of the art in this topic defined by Sangiorgi. The techniques used here have subsequently inspired further analysis of the higher-order pi-calculus and variants (Yongian Li; "Schmitt, Stefani"; "Kobayashi, Sangiorgi, Sumii"; "Hildebrandt, Godskesen, Bundgaard"), models of web-based data ("Maffeis, Gardner"), models of aspects ("Jagadeesan, Pitcher, Riely"), as well as denotational games-based models of higher-order languages (Laird).
This is a study of the nature of behavioural equivalence in a calculus of distributed and mobile processes, and formed part of the EU FP5 FET Myths and Mikado projects (costed at approximately €1m and €3m respectively). The techniques for obtaining full abstraction results here have subsequently enabled analogous results in further scenarios, including distributed calculi with hierarchical structure (e.g. ambients), securely typed mobility (e.g. SafeDpi), and fault-tolerant systems (Francalanza). The contents of this paper form a substantial portion of a new textbook by Hennessy published by CUP, A Distributed Pi-Calculus.
This work was conducted as part of the Myths and Mikado EU projects on Global Computing and uses dependent types to provide a sophisticated typed static analysis of program migration and resource control in distributed systems. A shorter version of this paper constituted a substantial deliverable to the highly rated Myths project. The analysis has informed developers of actual type-safe distributed programming languages such as Acute, Fraktal, and Lambda5, as well as further research in the area of access and mobility control (Bugliesi, Colazzo, Crafa). This research was also influential in allowing Yoshida to obtain further EPSRC funding (£250K) on an extension of this topic.
This is the first full-abstraction result for a calculus of concurrent, mutable, imperative objects and outlines a new approach for obtaining such results. The novel proof technique used here was directly employed in a number of papers on increasingly sophisticated object settings by Abraham, Bonsangue, De Boer, and Steffen, and was the focus of an entire session of FMCO 2004. The results here also enabled my co-author and myself to construct the first fully abstract trace model of a significant imperative core of Java (cf. Java Jr.: Fully abstract trace semantics for a Core Java Language, ESOP 2005).
This paper was published in one of the oldest and most prestigious journals in biology, with an impact factor of 3.5. It exemplifies the use of computational models to make contributions in biological science: our simulation provided a novel explanation for a previously mysterious behaviour in rats. The work attracted interest from laboratory and field biologists, with citations in journals such as Animal Behaviour and Applied Animal Behaviour Science. The paper has increased the profile of evolutionary simulation modelling in biology.
This paper was published in Cancer Research, with an impact factor of 7.6. It has had considerable impact in the field, with 69 citations (ISI database). My contribution was to show how neural networks could advance the state of the art in renal cancer diagnosis. The work helped to bring valuable machine learning methods into the field of spectroscopic analysis for cancer research. It was done in collaboration with the Cancer Research UK Clinical Centre at St. James's University Hospital in Leeds.
This is one of two papers co-authored with my former PhD student Daniel Franks, and published in one of the oldest and most prestigious journals in biology (impact factor 3.5). The paper raised awareness that, when modelling predator-prey interactions, predation rates on prey are not necessarily a suitable surrogate for evolutionary fitness. Since these papers there has been an increased use of evolutionary simulation modelling in studying warning signals and mimicry. Dan has since gone on to a Research Councils UK fellowship and a promising career.
This is one of two papers co-authored with my former PhD student Daniel Franks, and published in one of the oldest and most prestigious journals in biology (impact factor 3.5). The paper is an example of the use of computational models to make theoretical advances in biology. Ruxton et al. (2004) in "Avoiding Attack", the authoritative book on anti-predator defences, described the work as "important because it marks a shift in emphasis away from a simple dichotomous view of two unpalatable prey and a single predator to a more realistic community perspective."
The paper brought together many earlier results on adhesive categories and deriving bisimulation congruences in order to show how to derive labels for a class of models which includes variants of Milner's bigraphs. LICS is one of the most prestigious conferences in the field, with a very low acceptance rate. The paper has been influential in part because it showed that Ehrig and Konig's borrowed contexts are instances of the authors' 2-categorical theory of deriving bisimulation congruences.
An expanded version of an initial workshop paper entitled "Deriving process congruences: a 2-categorical approach". The use of 2-categories has enabled the clarification and solution of several problems encountered by Robin Milner and James Leifer in their study of the derivation of labelled transition systems starting from a reduction semantics. This paper was the initial contribution to a research project which embodies the other papers listed here and which continues through Sobocinski's EPSRC postdoctoral fellowship.
The paper is the full journal version of a conference paper entitled "Adhesive Categories", published in the proceedings of FoSSaCS '04, and introduces the notions of adhesive and quasiadhesive categories. This research contribution has been influential in the fields of graph transformation and concurrency theory: indeed, the two papers already have 40 cumulative citations on Google Scholar, and adhesive categories form an integral part of an Italian government-funded research project involving the Universities of Pisa, Udine, Venice and Insubria.
Expanded on the use of 2-categories in the derivation of operational semantics of calculi for concurrency and showed that the technique encompasses the precategories used by Robin Milner. This is the journal version of the FoSSaCS paper which was recognized as the best theoretical paper at ETAPS '03 by the European Association for Theoretical Computer Science. While essentially a technical contribution, it clarified the expressivity of the two approaches and is often cited (together with the conference version, over 40 citations according to Google Scholar).
This is the first paper to address the revenue maximization problem in single-item auctions where the bidders' probability distributions are discrete rather than continuous. We give a closed-form solution for the case where the bidders' distributions are known, and an efficient learning-based algorithm for the practically important case where there is only partial information about the distributions. The discrete case is a more realistic model for many practical applications; its analysis enables us to design real-life auctions with provable guarantees on performance and rational bidders' behavior.
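As a small, concrete illustration of the discrete setting (not the paper's closed-form solution or its learning algorithm), the sketch below brute-forces the expected revenue of a second-price auction with a reserve price when bidders' values are drawn i.i.d. from a known discrete distribution, and searches the support for the best reserve. The particular values, probabilities and number of bidders are assumptions for illustration only.

    # Brute-force revenue of a second-price auction with a reserve (toy sketch).
    from itertools import product

    values = [1, 2, 4]          # hypothetical discrete support of bidder valuations
    probs  = [0.5, 0.3, 0.2]    # corresponding probabilities (sum to 1)
    n_bidders = 2

    def expected_revenue(reserve):
        total = 0.0
        for profile in product(range(len(values)), repeat=n_bidders):
            prob = 1.0
            for idx in profile:
                prob *= probs[idx]
            bids = sorted((values[i] for i in profile), reverse=True)
            qualifying = [b for b in bids if b >= reserve]
            if not qualifying:
                revenue = 0.0                    # no bid meets the reserve: item unsold
            elif len(qualifying) == 1:
                revenue = reserve                # single qualifying bidder pays the reserve
            else:
                revenue = max(reserve, bids[1])  # otherwise pay the second-highest bid
            total += prob * revenue
        return total

    best = max(values, key=expected_revenue)
    print(f"best reserve={best}, expected revenue={expected_revenue(best):.3f}")

In this toy setup expected revenue cannot decrease as the reserve rises between consecutive support values, so checking only the support values suffices; the paper's analysis concerns the much harder case where the distribution itself is only partially known.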
The paper shows that the main result of a widely cited paper by Kearns, Littman and Singh from NIPS 2001 is incorrect. The KLS paper proposed an algorithm for finding a Nash equilibrium in graphical games on trees; we show that the output of this algorithm is not necessarily a Nash equilibrium. We show how to fix the KLS algorithm: our algorithm is the only polynomial-time algorithm for finding Nash equilibria in an infinite class of graphical games. A follow-up paper has been accepted to ACM EC'07. Also, our paper has been studied in a graduate seminar at Tel Aviv University.
The paper introduces a new technique for proving lower bounds on total payment in set system, or "hire-a-team", auctions. It focuses on a special case of this scenario -- path auctions -- and proves a tight lower bound on payments as well as designing an optimal mechanism for this problem. The paper stimulated interest in set system auctions. Its techniques were subsequently used by us and several other groups (Karger et al. at MIT, Karlin et al. at U. of Washington, Ronen et al. at Technion) to analyze other subclasses of set system auctions. The paper was studied in graduate classes or seminars on Algorithmic Game Theory at Stanford, Berkeley, MIT, NYU, University of Chicago, University of Miami, Technion (Israel) and the Hebrew University of Jerusalem (not including presentations given by the authors).
The frugality ratio of a procurement auction is the measure of the center's overpayment relative to "fair" cost. This concept was introduced by Archer and Tardos in 2002 and widely studied since then. In this paper, we give the first tight bound on frugality ratio of a mechanism for an NP-hard problem, namely, vertex cover. The paper has been invited to the special issue of Games and Economic Behavior (the most prestigious game theory journal) devoted to the best papers of ACM EC'07.
Sunderland was jointly supervised by myself and Damper; as part of his work, the rapid prototyping of robotic models was required. The paper discusses an XML programming approach to describing a robot or other physical system for use with a commercial full-physics simulator. The approach was further enhanced by the integration of MATLAB to give a fully functioning robotic simulator. This approach permitted rapid prototyping of both the robot and its simulation environment, which significantly reduced the development time of the robotic models, hence extending the range of models and parameters that were studied during his PhD.
Dominguez-Lopez was supervised by myself, Damper and Harris. The paper discusses the autonomous control of a robot gripper developed previously within the School using a neurofuzzy approach. The hand incorporated slip and force sensors from my previous research. We demonstrated that a hybrid system combining a supervised learning network with GARIC (Generalised Approximate Reasoning based Intelligent Control) gave significant benefits in controlling a multi-fingered gripper where little or no knowledge of the load is available. These conclusions were supported by simulations and experimental work. Aspects of the work have been taken forward by Dominguez-Lopez following his return to Mexico, leading to further joint publications.
Based on previous work on advanced robotics, I was invited to provide a perspective on a novel tactile sensor (Maheshwari and Sarel, University of Nebraska) for the medicine and robotics communities. The perspective concluded that, while at an early stage of development, the use of this sensor could have a significant impact on the design and control of multi-fingered robotic end effectors, such as the Whole Arm Manipulator project on which I was the principal investigator. The publication of the perspective led to considerable interest in the media, leading to interviews with the BBC World Service, Radio Manchester, New Scientist and The Times.
Dubey developed this novel photoelastic-based robotic sensor during his research into robotic end effectors. The sensor is both simple and easy to install in a fingertip, and allows the detection of both the applied force and slip from a single sensing unit, with the discrimination undertaken in signal processing. Because the simple and robust sensor is optically based, it is largely unaffected by external disturbances. Dubey currently holds a lecturing post at the University of Bournemouth, where he is continuing to exploit this technology for robotic and prosthetic applications.
Gait as a biometric has seen increasing interest since we pioneered it in the mid-1990s. This paper concerns the most sophisticated model basis, which accumulates a frequency-based description of thigh inclination direct from image data. This study derived from early work in gait biometrics and has sparked much interest. (There are over 60 journal citations already to Cunado's work [Science Citation Index] and over 200 citations for Cunado's PhD work in total [Google Scholar].) It was part of the work that led to our participation in the DARPA-funded Human ID project, and our appearance on ABC News (twice).
This paper reflects the combination of our new approaches to moving-shape extraction, with our new approaches to arbitrary shape description and extraction. It is applied in spinal analysis, easily handling the low signal to noise ratio and occlusion in digital videofluoroscopic images. Interestingly, it also features an early application of phase congruency for low-level feature extraction. We believe that future interest in spinal image analysis will use motion and that this paper will constitute a pioneering approach to extracting and describing moving replicated shapes. This is reflected in the recent citations to this paper.
This is an (invited) review paper showing the current progress in gait biometrics. It reflects the current interest in gait as well as our leading international position. For gait, recognition can be achieved outdoors and at low resolution (tasks hardly attempted by many other biometrics). These advances have been demonstrated in many top conferences and journals. The Southampton gait database is used internationally. Public interest in our research is manifest in extensive media interest, including a leading article in The Times. Research support included a $1M grant from DARPA on the Human ID at a Distance program.
This paper reflects a joint interest with geography in translating computer vision techniques to the automatic understanding of remotely sensed images. The imagery is low resolution, and here we showed how we could use context to infer composition at a sub-pixel scale. It was the first work of its type in remote sensing and it sparked considerable interest: there are over 50 journal citations to Tatem's PhD work in the Science Citation Index and nearly 100 in Google Scholar. His work featured in the top conferences in this area, as well as in this journal, the top international journal in the field.
The paper describes a complete system for the hugely difficult task of off-line Chinese character recognition, based on active shape modelling of radicals. The system achieves the best known performance (93% characters correct) on a very large publicly accessible database (HITPU, 430,800 Chinese characters).
In this paper, we adopt a consistent Bayesian framework allowing for the integration of prior knowledge of how people walk as well as updating of walker models as more information becomes available. Previous works have often used Bayesian techniques but not used them consistently, coupling them with ad hoc component parts. We demonstrate the performance gains that can be obtained by having a consistent framework relative to the HumanID gait challenge benchmark. Bayesian updating allows us to bootstrap from training on clean video data recorded in our gait lab to performing on noisy outdoor images, a task which other researchers have barely begun to address.
This paper documents the best performance yet achieved on the difficult task of automatic pronunciation modelling, important in speech technology and speech synthesis. This is achieved by novel information fusion techniques. The performance of this system forms the baseline (which participants will try to surpass) for the recently-announced PASCAL Challenge in letter-to-sound conversion. PASCAL is a large EU Network of Excellence (57 universities, 500 researchers) coordinated by Southampton.
Building on our work on pronunciation by analogy (PbA) of unknown words for text-to-speech (TTS) applications, this paper extends the problem to the word class of proper names and shows that PbA is superior to other methods tested on this important problem. Although in press at present, our approach should become the method of choice for future TTS systems.
This paper, the first to describe the use of Gaussian attractors in image-based feature extraction, is novel in representing the shape of an object as a small number of points, dependent on the pixel distribution but invariant to illumination and scale and robust in the presence of noise. The potency of this representation is demonstrated by its application as a descriptor for human ears. This in itself was novel, as at the time no practical implementation of ear recognition as a biometric had been reported. The algorithm, cited by most papers in this specialist field, is currently the reference.
An invited paper to a special issue resulting from the 3rd International Conference on Audio and Video based Biometric Person Authentication; this work pioneers the use of the generalised symmetry operator as a description of human shape and motion. This work forms the basis for the subsequent development of novel spatio-temporal symmetry which achieved close to 100% correct classification rate when applied to internationally recognised standard test sets. Developed during the 1st year of the DARPA Human ID at a Distance project, this work was instrumental in that project's success and attracting subsequent funding for Biometric research.
This paper describes a biologically plausible model for Automatic Gait Recognition. This was the first reported development of such a model and the first of any form to be applied to both running and walking in a unified manner. Algorithmically the model gains its strength from considering all available data, not just individual images. Although not predictive, the ability to incorporate both running and walking as a single measure in a single model has advantages. These results have led directly to the continuing development of model-based pose invariant biometrics in phase II of the DIF DTC.
The potential of the Force Field Transformation as a descriptor for human ears was demonstrated when the technique was applied to the images in Surrey University's XM2VTS Face and Head database, achieving a 99% correct recognition rate. The success of this algorithm was a significant contributory factor in obtaining funding as part of the initial phase of the DIF DTC. The project, concluded at the end of 2006, resulted in the construction of a multi-modal biometric portal which for the first time fused human gait, face and ears into a unified framework. In trials the system achieves 100% recognition rates.
This paper describes the work of two of the leading Adaptive Hypertext research labs to make their systems inter-operate. The contribution is that, although the work is shown to be technically feasible, it is also shown not to be meaningful to users. Thus the significance of this paper is that it drew a line under a long thread of research on interoperation between open HT systems. This international collaboration was organised by Davis, and he supervised the work conducted by the Eindhoven student and an ex-PhD student of his.
This work reported on three years' worth of evaluation and provided empirical evidence for the use of Hypertext advisors in e-learning. An important contribution was the recognition of the importance of dialogue between the student and the advisor. Some hundreds of medical students from Southampton have used this system as part of their degree. Davis managed the team that produced and adapted the hypertext system for this research, and wrote the paper.
The Dialog Plus toolkit has made an important impact in the area of pedagogical planning, and there are 200 teachers from 89 institutions in the UK, USA, mainland Europe, Australia and China registered to use the system (as well as many more who have registered as guests to evaluate the system). This paper reports work on relating the output of this toolkit to the increasingly important IMS Learning Design. This is important as it unites the technical and pedagogical approaches to learning design. The toolkit is currently in use at Warwick and Southampton as part of CPD for teachers new to eLearning, and is being used in the JISC-funded Edit4L project. Davis is Principal Investigator on the project that produced the toolkit and supervised the work linking it with IMS LD.
Social tagging is a fairly recent phenomenon. The contribution of this paper is that it is one of the first to publish evidence that valuable structured metadata can be extracted from the unstructured clutter that arises from freeform tagging. This paper summarizes some of the work that has appeared in 8 related conference papers, including the Best Paper at the Second International IEEE Conference on Innovations in Information Technology. Davis supervises this PhD work.
This is a complete rewrite of a paper published in 1982, which received some acclaim, not least in the functional programming community and at MIT, where Sussman and others used it extensively, including in VLSI design. The example in both papers is a complex woodcut by Maurits Escher, of interlocking fish of various sizes. In the original paper the fishes were decomposed into square tiles, which is not the way the artist had originally conceived them. In this paper the right decomposition is described, where the fish are drawn whole on triangular tiles. All the fish are identical, except for size and rotation. Advances in graphics knowledge have made this simpler decomposition possible. The paper introduces the idea that pictures can be described declaratively, and that this is considerably simpler than giving instructions on how to draw them. The complexity of the Escher picture contributes to a convincing claim. The paper concludes with some algebraic properties of pictures of the sort that Escher made famous. Google Scholar gives the original and revised papers 48 citations.
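To convey the declarative-picture idea in modern terms, here is a minimal sketch (in Python, with assumed names and a deliberately crude representation, rather than the paper's own notation): a picture is a function from a bounding parallelogram, given as an origin and two edge vectors, to the line segments to draw, and combinators such as beside, above and rot build composite pictures without ever saying how to draw them.

    # Declarative pictures via combinators (illustrative sketch only).
    def vec_add(a, b): return (a[0] + b[0], a[1] + b[1])
    def vec_scale(k, a): return (k * a[0], k * a[1])
    def vec_neg(a): return (-a[0], -a[1])

    def grid(lines):
        # Build a picture from line segments given in unit-square coordinates.
        def picture(origin, bx, by):
            segs = []
            for (x0, y0), (x1, y1) in lines:
                p0 = vec_add(origin, vec_add(vec_scale(x0, bx), vec_scale(y0, by)))
                p1 = vec_add(origin, vec_add(vec_scale(x1, bx), vec_scale(y1, by)))
                segs.append((p0, p1))
            return segs
        return picture

    def beside(p, q):
        # Place p and q side by side, each in half of the box.
        def picture(origin, bx, by):
            half = vec_scale(0.5, bx)
            return p(origin, half, by) + q(vec_add(origin, half), half, by)
        return picture

    def above(p, q):
        # Place p above q, each in half of the box.
        def picture(origin, bx, by):
            half = vec_scale(0.5, by)
            return q(origin, bx, half) + p(vec_add(origin, half), bx, half)
        return picture

    def rot(p):
        # Rotate a picture 90 degrees anticlockwise within its box.
        def picture(origin, bx, by):
            return p(vec_add(origin, bx), by, vec_neg(bx))
        return picture

    # A toy motif and a 2x2 composition, described purely declaratively.
    motif = grid([((.2, .2), (.8, .2)), ((.8, .2), (.5, .8)), ((.5, .8), (.2, .2))])
    composite = above(beside(motif, rot(motif)), beside(rot(rot(motif)), motif))
    for seg in composite((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)):
        print(seg)

In such a setting, the algebraic properties of pictures mentioned above can be stated as equations between combinator expressions.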
We bring together two aspects of Software Engineering that we show work well together: components and formal reasoning. This paper was influential in our obtaining the Reasoning about Inconsistency (RICES) grant from EPSRC (£0.5M). This in turn was very influential in our obtaining the first of the Open Middleware Infrastructure (OMII) grants from EPSRC (£6.5M), since we had shown that we could deal with large systems in a systematic and rigorous way. We subsequently obtained a further £5.5M from EPSRC and €4.7M from the EU to sustain the OMII into the next decade.
This work had actually been carried out as part of the Reasoning about Inconsistency (RICES) project but took a number of years to appear, having been published initially at COMPSAC 2003 and then selected as a best paper for inclusion in this special issue of JSS. The paper expands upon Axelrod's Evolution of Cooperation by studying survival behaviour in a community of negotiators trying to maximise sales of car-hire-days. The example was given to us by our industrial collaborator (ICL Fujitsu) and was developed further by them as a promotional tool.
In this paper we introduced a semi-formal notation for describing workflow in distributed applications, specifically intended for scientific applications on the Grid. The architecture of the solutions developed using this notation has informed the interoperability research we have done in The OMII-Europe project and was a significant factor in qualifying us to lead that research. The choice of reusable web services for OMII-Europe was made on the basis of our workflow studies. OMII-Europe is now considered one of the leading EU projects on Grid interoperability, tackling as it does the very real problem of making long-running applications able to cooperate even though their base infrastructures are very different. Reusable Web Services, based on Open Standards, was the key that we chose.
This is the paper that launched the worldwide open-access self-archiving initiative and led directly to the UK Select Committee recommendation to mandate self-archiving for all RCUK-funded research output: http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/39902.htm. It also influenced the RAE decision to become exclusively metric after 2008 and is helping to shape European and US self-archiving policy of both research funders (ERC, EURAB) and universities. Our later scientometric studies demonstrated how self-archiving research free for all on the web dramatically increases both its usage and its citation impact. Citations: Scholar: 85; ISI: 26; Google Books: 24; Google URLs searching on (harnad self-archiving): 116,000
Since we reopened the topic of language evolution (Harnad et al. 1976, after a 100-year moratorium), its most important unanswered question has been: “What caused those rapid, dramatic changes in the human brain hundreds of thousands of years ago?” Our neural net and artificial life simulations reveal the powerful adaptive advantage of language: the capacity to acquire categories by hearsay instead of the costly and risky direct experience on which all other species rely. This finding has been influential in the subsequent course of research on both language evolution and category acquisition. Citations: Google Scholar: 140; search on (harnad language evolution): Google Books: 158; Google: 96,300