INF3490/4490 Biologically Inspired Computing: Summary & Questions (Weria and Kai)
INF3490/4490 Exam
• Format: written/digital (see the small example at uio.inspera.no)
• When: November 30, at 09:00 (4 hours)
• "Closed book exam": no materials are permitted at the exam
• Location: see StudentWeb and http://www.uio.no/studier/emner/matnat/ifi/INF3490/h18/eksamen/index.html or http://www.uio.no/studier/emner/matnat/ifi/INF4490/h18/eksamen/index.html
• The exam is the same in INF4490 as in INF3490
Multiple-choice Questions on Parts of the Exam
INF3490/4490 — Biologically Inspired Computing, November 30th, 2017. Exam hours: 09:00–13:00. Permitted materials: none. The course teachers will visit the exam room at least once during the exam.
The exam text consists of problems 1-40 (multiple-choice questions), answered by marking each statement true or false, and problems 41-43, answered by entering text. Problems 1-40 have a total weight of 80%, while problems 41-43 have a weight of 20%.
Scoring of the multiple-choice questions: each problem has a variable number of true statements, but there is always at least one true and one false statement per problem. If you think a statement could be either true or false, consider the most likely use/case. 0.5 points are given for each correctly marked statement; an incorrectly marked or unmarked statement gives 0 points. The maximum score per question is 2 points and the minimum is 0. The lack of negative points is compensated for through grade-threshold adjustments (to adjust for the opportunity to get a positive score by random answering).
Most likely use/case
• If you think a statement could be either true or false, consider the most likely use/case
• Example: "Evolutionary algorithms maintain a population of candidate solutions"
– May be false for certain specific EAs
– However, the main focus in our class has been on EAs with a population
Multiple-choice Questions in the Digital Exam (screenshot omitted)
Digital exam: Text reply questions (screenshot omitted)
We prefer answers to the text problems in English, but we will naturally not reduce the score for spelling errors as long as the understanding appears correct, e.g.:
• "the bias node multiplied with it's respectful weights is used to calculate the activation function in the first hidden layer."
• Answer briefly and with structured and formatted text.
Example: Ethical Recommendations for Robots, shown with and without structure and formatting (example screenshots omitted)
INF3490/INF4490 Syllabus:
• Selected parts of the following books (details on the course web page):
– A.E. Eiben and J.E. Smith: Introduction to Evolutionary Computing, Second Edition (ISBN 978-3-662-44873-1). Springer.
– S. Marsland: Machine Learning: An Algorithmic Perspective. ISBN: 978-1466583283
– On-line papers (on the course web page)
• The lecture notes
Supporting literature in Norwegian (not syllabus)
Jim Tørresen: hva er KUNSTIG INTELLIGENS ("What is Artificial Intelligence"), Universitetsforlaget, Nov 2013, ISBN: 9788215020211. Topics:
• Artificial intelligence and intelligent systems
• Problem solving with artificial intelligence
• Evolution, development and learning
• Sensing and perception
• Movement and robotics
• How intelligent can and should machines become?
Brief Summary: OPTIMIZATION AND SEARCH
Search Landscapes (figure omitted)
Some Optimization Methods
1. Exhaustive search
2. Greedy search and hill climbing
3. Simulated annealing
4. Gradient descent/ascent
– Not applicable for discrete optimization
(A sketch contrasting hill climbing and simulated annealing follows below.)
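A minimal sketch contrasting pure hill climbing with simulated annealing on a toy 1-D objective. The objective function, step size, and linear cooling schedule are illustrative assumptions, not from the course material.

```python
import math
import random

def f(x):
    # Toy multimodal objective to be maximized (an invented example).
    return math.sin(x) + math.sin(3 * x) - 0.1 * x * x

def hill_climb(x, steps=1000, step=0.1):
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):          # only accept improvements (pure exploitation)
            x = candidate
    return x

def simulated_annealing(x, steps=1000, step=0.1, t0=1.0):
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9  # linear cooling schedule
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - f(x)
        # Accept improvements always; accept worse moves with a
        # temperature-dependent probability (exploration, early on).
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
    return x

print(hill_climb(0.0), simulated_annealing(0.0))
```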
Exploitation and Exploration
• Search methods should combine:
– Trying completely new solutions (as in exhaustive search) => Exploration
– Trying to improve the current best solution by local search => Exploitation
Brief Summary: EVOLUTIONARY ALGORITHMS
The Problem with Hill Climbing (illustration omitted)
General scheme of EAs (flowchart): Initialization → Population → Parent selection → Parents → Recombination (crossover) → Mutation → Offspring → Survivor selection → back to Population, until Termination. (A generic code sketch of this loop follows below.)
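A sketch of the general scheme as a Python loop. The problem-specific pieces (`fitness`, `random_individual`, `crossover`, `mutate`) are assumed to be supplied by the user; binary tournament parents and best-N replacement are just one possible choice of selection operators.

```python
import random

def evolve(fitness, random_individual, crossover, mutate,
           pop_size=50, generations=100):
    population = [random_individual() for _ in range(pop_size)]   # initialization
    for _ in range(generations):                                  # until termination
        offspring = []
        while len(offspring) < pop_size:
            # Parent selection: binary tournament, one common choice.
            p1 = max(random.sample(population, 2), key=fitness)
            p2 = max(random.sample(population, 2), key=fitness)
            child = mutate(crossover(p1, p2))                     # variation
            offspring.append(child)
        # Survivor selection: keep the best of parents + offspring.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)
```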
Genotype vs. phenotype (figure contrasting a genotype with its phenotype, with loci marked, omitted)
Representation and variation operators
• First stage of building an EA, and the most difficult one: choosing the right representation for the problem
• The type of variation operators needed depends on the chosen representation
• Representations we have seen (see the binary-string sketch below):
– Binary strings
– Integers
– Floating-point numbers
– Permutations
– Trees
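As an example of representation-dependent variation, a sketch of the classic operators for a binary-string representation; the mutation rate is an illustrative default.

```python
import random

def bit_flip_mutation(genotype, p=0.01):
    # Flip each bit independently with probability p.
    return [1 - g if random.random() < p else g for g in genotype]

def one_point_crossover(parent1, parent2):
    # Cut both parents at the same random locus and swap the tails.
    point = random.randint(1, len(parent1) - 1)
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])
```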
Selection in EAs
• Selection can occur in two places:
– Parent selection (selects mating pairs)
– Survivor selection (replaces population)
• Selection works on the population -> selection operators are representation-independent!
• Selection pressure: as selection pressure increases, fitter solutions are more likely to survive, or be chosen as parents
Effect of Selection Pressure: illustrations of population distributions under low and high selection pressure (figures omitted)
Selection
• Parent selection:
– Fitness proportionate selection
– Rank-based selection
– Tournament selection
– Uniform selection
• Survivor selection:
– Elitism
– (µ, λ)-selection
– (µ + λ)-selection
(Sketches of tournament and (µ + λ)/(µ, λ) selection follow below.)
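Sketches of two schemes from the list above: tournament parent selection, and the (µ + λ) and (µ, λ) survivor selections.

```python
import random

def tournament_selection(population, fitness, k=2):
    # Pick k random individuals and return the fittest;
    # a larger k gives higher selection pressure.
    return max(random.sample(population, k), key=fitness)

def mu_plus_lambda(parents, offspring, fitness, mu):
    # (mu + lambda): parents and offspring compete for the mu slots.
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]

def mu_comma_lambda(offspring, fitness, mu):
    # (mu, lambda): only offspring are considered (requires lambda >= mu).
    return sorted(offspring, key=fitness, reverse=True)[:mu]
```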
Summary: the standard EA variants
• Genetic Algorithm: representation: usually fixed-length vector; crossover: any or none; mutation: any; parent selection: any; survivor selection: any; specialty: none
• Evolution Strategy: representation: real-valued vector; crossover: discrete or intermediate recombination; mutation: Gaussian; parent selection: random draw; survivor selection: best N; specialty: strategy parameters
• Evolutionary Programming: representation: real-valued vector; crossover: none; mutation: Gaussian; parent selection: one child each; survivor selection: tournament; specialty: strategy parameters
• Genetic Programming: representation: tree; crossover: swap sub-tree; mutation: replace sub-tree; parent selection: usually fitness proportional; survivor selection: generational replacement; specialty: none
Performance Measures
• Performance measures (off-line)
– Efficiency (algorithm speed, also called performance)
• Execution time
• Average number of evaluations to solution (AES, i.e., number of generated points in the search space)
– Effectiveness (solution quality, also called accuracy)
• Success rate (SR): % of runs finding a solution
• Mean best fitness at termination (MBF)
• "Working" measures (on-line)
– Population distribution (genotypic)
– Fitness distribution (phenotypic)
– Improvements per time unit or per genetic operator
– …
(A sketch computing SR, MBF, and AES follows below.)
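A small sketch of how the off-line measures could be computed from a batch of independent runs; the `runs` tuple format is an assumption for illustration.

```python
def summarize(runs):
    # `runs` is an assumed list of (best_fitness, evaluations, solved)
    # tuples, one per independent run of the EA.
    solved = [r for r in runs if r[2]]
    sr = len(solved) / len(runs)                      # success rate (SR)
    mbf = sum(r[0] for r in runs) / len(runs)         # mean best fitness (MBF)
    aes = (sum(r[1] for r in solved) / len(solved)    # avg. evaluations to solution (AES)
           if solved else float("inf"))
    return sr, mbf, aes
```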
Hybrid EAs
Multi-Objective Evolutionary Algorithms
• Find a set of non-dominated solutions (approximation set) following the criteria of:
– convergence (as close as possible to the Pareto-optimal front)
– diversity (spread, distribution)
Multi-Objective EAs: Requirements
1. A way of assigning fitness and selecting individuals
– usually based on dominance
2. Preservation of a diverse set of points
– similarities to multi-modal problems
3. Remembering all the non-dominated points you have seen
– usually using elitism or an archive
(A dominance sketch follows below.)
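A sketch of Pareto dominance and extraction of the non-dominated set (the approximation set), assuming all objectives are minimized; the sample points are invented.

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (3, 3) is dominated by (2, 2); the other three points are non-dominated.
print(non_dominated([(1, 5), (2, 2), (4, 1), (3, 3)]))
```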
Brief Summary: MACHINE LEARNING
Characteristics of ML
• Learning from examples to analyze new data
• Generalization: provide sensible outputs for inputs not encountered during training
• Iterative learning process
• Types:
– Supervised learning
– Reinforcement learning
– Unsupervised learning
Supervised learning
• Training data is provided as pairs: $\{(x_1, f(x_1)), (x_2, f(x_2)), \ldots, (x_P, f(x_P))\}$
• The goal is to predict an "output" $y$ from an "input" $x$: $y = f(x)$
• The output $y$ for each input $x$ is the "supervision" given to the learning algorithm.
– Often obtained by manual annotation
– Can be costly to do
• Most common examples (a minimal regression sketch follows below):
– Classification
– Regression
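A minimal illustration of the supervised setup: fit a predictor from (input, output) pairs and apply it to an unseen input. Plain least-squares line fitting stands in for a learning algorithm here; the data values are invented.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])      # inputs x_i
y = np.array([1.1, 2.9, 5.2, 7.1])      # supervised outputs f(x_i)

A = np.vstack([x, np.ones_like(x)]).T   # design matrix for y = w*x + b
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(w * 4.0 + b)                      # predict for an unseen input
```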
Neural Networks: McCulloch and Pitts Neurons
(Figure omitted: a biological neuron with dendrites, axon, and terminal branches, next to the model with inputs x1…xn, weights w1…wn, and summation Σ.)
• Greatly simplified biological neurons
• Sum the weighted inputs
• If the total is greater than some threshold, the neuron "fires"
• Otherwise it does not
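A direct sketch of the computation described above; the input, weight, and threshold values are arbitrary examples.

```python
def mcp_neuron(inputs, weights, threshold):
    # Weighted sum of the inputs, then a hard threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0   # "fires" only above threshold

print(mcp_neuron([1, 0, 1], [0.5, 0.5, 0.5], 0.9))  # 1.0 > 0.9 -> fires
```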
The Perceptron Network (figure of inputs connected to output units omitted)
Training a perceptron
The neuron computes the activation $a = \sum_{i=1}^{n} w_i x_i$ from the inputs $x_1, \ldots, x_n$ and weights $w_1, \ldots, w_n$, and outputs
$$y = \begin{cases} 1 & \text{if } a \geq \theta \\ 0 & \text{if } a < \theta \end{cases}$$
(A training sketch follows below.)
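A sketch of perceptron training with the threshold θ folded into a bias weight on a constant -1 input (a common convention, used in Marsland's book); the learning rate and epoch count are illustrative.

```python
import numpy as np

def train_perceptron(X, t, eta=0.25, epochs=20):
    X = np.hstack([X, -np.ones((X.shape[0], 1))])   # append the bias input -1
    w = np.random.uniform(-0.05, 0.05, X.shape[1])  # small random initial weights
    for _ in range(epochs):
        y = (X @ w >= 0).astype(float)              # threshold activation
        w -= eta * X.T @ (y - t)                    # update weights on the errors
    return w

# Learns AND, a linearly separable function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(train_perceptron(X, np.array([0., 0., 0., 1.])))
```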
What Can Perceptrons Represent?
(Plots of AND and XOR over the inputs (0,0), (0,1), (1,0), (1,1) omitted: AND is linearly separable, XOR is not.)
• Only linearly separable functions can be represented by a perceptron
Solution for XOR: Add a Hidden Layer!
Minsky & Papert (1969) offered a solution to the XOR problem: combine perceptron unit responses using a second layer of units. (Network diagram omitted; a sketch follows below.)
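A sketch of a two-layer solution to XOR with hand-picked weights (an assumption for illustration): two hidden perceptron units computing OR and NAND, combined by an AND output unit.

```python
import numpy as np

def step(a):
    return (a >= 0).astype(int)

def xor_net(x):
    x = np.append(x, 1.0)                              # constant bias input
    h1 = step(np.dot([1, 1, -0.5], x))                 # hidden unit 1: OR
    h2 = step(np.dot([-1, -1, 1.5], x))                # hidden unit 2: NAND
    return step(np.dot([1, 1, -1.5], [h1, h2, 1.0]))   # output unit: AND

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_net(np.array(x, dtype=float)))        # prints 0, 1, 1, 0
```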
Backpropagation (Rumelhart, Hinton and Williams, 1986)
• Forward step: propagate activation from the input to the output layer
• Backward step: propagate errors from the output to the hidden layer
(Network diagram omitted: inputs x_i, hidden units x_k, outputs y_j, weights w_ki and w_jk, errors δ_k and δ_j. A minimal sketch follows below.)
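A minimal sketch of one backpropagation step for a single hidden layer with sigmoid activations and squared error; array shapes and the learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_step(x, t, W_hidden, W_output, eta=0.1):
    # Forward step: propagate activation from input to output.
    h = sigmoid(W_hidden @ x)
    y = sigmoid(W_output @ h)
    # Backward step: propagate errors from output to hidden layer.
    delta_out = (y - t) * y * (1 - y)                  # output-layer deltas
    delta_hid = (W_output.T @ delta_out) * h * (1 - h) # hidden-layer deltas
    # Gradient-descent weight updates.
    W_output -= eta * np.outer(delta_out, h)
    W_hidden -= eta * np.outer(delta_hid, x)
    return W_hidden, W_output
```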
The Solution: Cross-Validation
To maximize generalization and avoid overfitting, split the data into three sets:
• Training set: train the model
• Validation set: judge the model's generalization ability during training
• Test set: judge the model's generalization ability after training
(A splitting sketch follows below.)
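A sketch of the three-way split; the 50/25/25 proportions and the shuffling seed are illustrative choices.

```python
import numpy as np

def split_data(X, t, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))                 # shuffle before splitting
    n_train, n_val = len(X) // 2, len(X) // 4     # 50% / 25% / 25%
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], t[train]), (X[val], t[val]), (X[test], t[test])
```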