ETHICAL HACKING CIS 6930
This 18-page set of class notes was uploaded by Aliyah Boyer on Friday, September 18, 2015. The notes belong to CIS 6930 at the University of Florida, taught by Staff in Fall. Since its upload, it has received 15 views.
Paper Review

Joint reviews are allowed; all coauthors will receive the same grade, but each should be prepared to answer questions about the paper in a pop-up-quiz fashion.

- Summary: 2 paragraphs, about 5 lines each (30 points). Summarize the problem that the paper is addressing, the criticisms/shortcomings that the paper identifies in previous/related work, the approach/solution that the paper provides to overcome those shortcomings, and the main conclusions/results of the paper. Use your own words to describe the summary (i.e., do not copy). You may use ideas from the abstract and conclusion in addition to your own understanding from reading the paper carefully.
- Strengths and weaknesses of the paper: at least 2 strengths and 2 weaknesses, about 2 lines each (30 points: 15 + 15). Provide your own view of the paper's approach: in which ways was it a good approach, and in which ways was it a bad one? Is this paper a major or a marginal contribution to the field, and why?
- Points and suggestions for improvement: suggest at least 3 ideas, about 3 lines each (30 points). Provide your own input to address the problems of the paper and suggest ways in which the paper could be improved. If you were to redo this study yourself, which points would you improve upon, and how?
- Exam-like question (not to exceed 4 lines) and its answer (not to exceed 10 lines) (10 points: 5 + 5).

Assessment of the review is based on originality (your own words and arguments, not copying), strength of argument, accuracy (identifying the major points), and clarity: summary 30, strengths and weaknesses 30, improvement suggestions 30, exam-like question 10.

Note, to help bookkeeping: please number the reviews that you submit from 1 to 10; this will help you and me keep track of your reviews. (You should have 10 reviews by the end of the semester.) There will be space in the review template to enter your review number. Submit your reviews as hardcopy during the lecture, and also keep an electronic copy for reference. Do not
email your reviews unless explicitly requested to do so. Thanks. Please follow the template provided on the web site for your reviews.

Lecture on the PCP Theorem and Approximability
Geometric Complexity and Applications, CIS 6930, September 9-26, 2008
Lecturer: Venkatakrishnan Ramaswamy
Scribe: Venkatakrishnan Ramaswamy

1 Introduction

In the first part of these notes we study probabilistically checkable proof systems (PCPs) and look at the celebrated PCP Theorem, which provides a probabilistic characterization of the class NP. We use the PCP Theorem to prove some hardness-of-approximation results. In the second part we describe a type of reduction called L-reductions which, unlike the many-one polynomial-time reductions used in the theory of NP-completeness, preserve approximation ratios. We use these reductions to develop a complexity theory of approximability and show the existence of a large class of problems that do not have a PTAS unless P = NP. The material in the first and second parts follows [1] and [2], respectively. The reader will frequently be referred to these books for details that are not central to the exposition here.

2 The PCP Theorem

The PCP Theorem provides a powerful probabilistic characterization of the class NP. In this section we discuss probabilistic proof verifiers, state the PCP Theorem, prove the easy direction of the inclusion, and present some intuition. We then use the PCP Theorem to prove a hardness-of-approximation result for MAX k-FUNCTION SAT. This result will be used in the next section to show that no PTAS exists for a large class of problems unless P = NP.

Recall that NP is usually defined to be the class of decision problems whose Yes instances have short (polynomial in the length of the input) witnesses that can be verified in polynomial time. Informally, the PCP Theorem states that for these problems there in fact exist witnesses that can be verified with high probability by looking at only a constant number of randomly chosen bits. The
witness string is often called a proof in the literature; we will use these terms synonymously. A probabilistically checkable proof system is described by two parameters, namely the number of random bits it needs and the number of bits of the proof it queries. The PCP Theorem uses O(log n) random bits and a constant number of query bits. The verifier for the PCP is a polynomial-time Turing machine which, apart from its input tape, has a tape that provides random bits and another tape that holds the proof. The proof tape is random access, i.e., any bit of the proof can be read in constant time. Which bits of the proof are queried is then a function of the input and the random bits. The Turing machine either halts and accepts the proof or halts and rejects it.

Definition 1. A language L is in PCP(r(n), q(n)) if there is a verifier V and constants c and k such that, on input x, V obtains a random string of length c r(n) and queries k q(n) bits of the proof. Further:
- if x is in L, then there is a proof y that makes V accept with probability 1;
- if x is not in L, then for every proof y, V accepts with probability < 1/2,
where the probabilities are over the random string.

The probability of accepting in case x is not in L is called the error probability. Intuitively, a PCP verifier is analogous to a lazy but competent TA who is grading proofs written by students. Since the TA is lazy, she only checks a small part of each student's proof and, based on the correctness of that small part, decides whether the proof is right or wrong. Since the TA is competent, if the entire proof is right she will never fail to be convinced by any part of it. However, if the proof is incorrect, it might be the case that she happened to look at a correct portion of the proof and erroneously concluded that the entire proof is right. The error probability quantifies the chance of this happening, and it is the intuitive reason for the one-sided error in the definition above.

It is immediate from the definition that NP = PCP(0, poly(n)). This
is because a deterministic verifier requires zero random bits, queries at most a polynomial number of bits of the proof, and has error probability zero, which is smaller than 1/2. The PCP Theorem offers another characterization of NP.

Theorem 2 (PCP Theorem). NP = PCP(log n, 1).

The inclusion PCP(log n, 1) contained in NP is easy to prove. Consider a language L in PCP(log n, 1); it thus has a PCP verifier. The claim is that, using this PCP verifier, we can construct a deterministic verifier. Both verifiers receive the input instance and a proof and seek to verify the proof for the input instance. Additionally, the PCP verifier uses O(log n) random bits and gives answers that hold with probability over this random string. What we do is simply run the PCP verifier over every possible random bit string, keeping the input instance and the proof the same, and accept if and only if the PCP verifier accepts for every random string. Since the random string has length O(log n) and the PCP verifier runs in polynomial time, the resulting deterministic verifier also runs in polynomial time. The other inclusion, NP contained in PCP(log n, 1), is much harder to prove and will not be discussed in these notes; the interested reader is referred to [3] for details.

Next we describe a probabilistic verifier for 3SAT that is weaker than the PCP verifier, which should give the reader a sense of how strong the PCP Theorem is. Consider a 3-CNF boolean formula with m clauses; 3SAT is in the class NP. Suppose we are given a certificate, which is an assignment to the variables, and O(log n) random bits. The weak verifier does the following. It randomly selects one of the clauses; since there are m clauses, about log m = O(log n) random bits suffice to make this choice. It then queries from the proof the truth values of the variables present in the chosen clause (the query size is therefore at most 3 bits), substitutes these truth values into the clause, and checks whether the clause evaluates to true.
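The weak verifier just described can be sketched in a few lines (a minimal illustration of my own, not from the notes; the clause encoding and the function name are assumptions):

```python
import random

def weak_3sat_verifier(clauses, assignment):
    """One round of the weak probabilistic 3SAT verifier: pick a random
    clause, read only the (at most 3) proof bits for its variables, and
    accept iff the clause is satisfied. Literals are nonzero ints:
    +i means x_i, -i means NOT x_i; `assignment` is the 'proof'."""
    clause = random.choice(clauses)      # uses about log(m) random bits
    for literal in clause:               # queries at most 3 proof bits
        if assignment[abs(literal)] == (literal > 0):
            return True                  # clause satisfied: accept
    return False                         # clause falsified: reject

# (x1 or x2) and (not x1 or x3) with a satisfying assignment:
clauses = [(1, 2), (-1, 3)]
proof = {1: True, 2: False, 3: True}
# A satisfying assignment is accepted no matter which clause is drawn.
assert all(weak_3sat_verifier(clauses, proof) for _ in range(100))
```

For an unsatisfiable formula, at least one clause is falsified by any proof, so a single round rejects with probability at least 1/m, matching the analysis that follows.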
If the clause evaluates to true, the verifier halts and accepts; otherwise it halts and rejects. If the formula is satisfiable and the certificate is right (there is at least one such certificate), then this verifier accepts with probability 1, since an assignment that satisfies the formula must make each clause evaluate to true. However, if the formula is unsatisfiable, then each certificate makes at least one clause evaluate to false, and we choose one such clause with probability at least 1/m. Therefore the error probability of this weaker verifier on unsatisfiable instances is at most 1 - 1/m. The breakthrough of the PCP Theorem is that it manages to get this error probability down to at most 1/2 while still querying a constant number of bits of the proof and using O(log n) random bits.

Next we describe the MAX k-FUNCTION SAT problem and prove a hardness-of-approximation result for it using the PCP Theorem.

Problem 1 (MAX k-FUNCTION SAT). Given n boolean variables x_1, ..., x_n and m functions f_1, ..., f_m, each of which is a function of k of the boolean variables, find a truth assignment to x_1, ..., x_n that maximizes the number of functions satisfied. Here k is a fixed constant, not part of the input.

Lemma 3. There exists a constant k for which there is a polynomial-time reduction from SAT to MAX k-FUNCTION SAT that transforms a boolean formula phi into an instance I of MAX k-FUNCTION SAT such that
- if phi is satisfiable, then OPT(I) = m, and
- if phi is not satisfiable, then OPT(I) < m/2.

Sketch of proof. Since SAT is in NP, it has a PCP(log n, 1) verifier V. Since the random bit string used by V has length at most c log n, we can go over all possible random bit strings, of which there are at most n^c. On being presented a random bit string and phi, V queries at most q bits of the proof, for some constant q; we take k = q. The instance I of MAX k-FUNCTION SAT is constructed as follows. It has one variable per bit of the proof. The idea is that for every random bit string (phi being fixed), whether V accepts or rejects is a boolean function of just the bits of the proof queried. We
construct this function explicitly: since the number of bits queried, q, is a constant, we can run through all 2^q bit strings to construct the function in constant time. We thus have one function for every random bit string. If phi is satisfiable, there exists a proof that makes the verifier accept for every random bit string; therefore there is an assignment to the variables of I that makes every function evaluate to true. If phi is not satisfiable, every proof leads to the verifier accepting on at most half of the bit strings; that is, for every assignment to the variables of I, less than half the functions evaluate to true.

3 Approximation and Complexity

In this section we first describe a type of reduction called L-reductions which, unlike the many-one polynomial-time reductions used in the theory of NP-completeness, preserve approximation ratios. We then define a complexity class of optimization problems called MAXSNP, which is motivated by a structural characterization of NP from finite model theory (Fagin's theorem). We list several problems that are known to be MAXSNP-complete under L-reductions. We then show that every problem in this class has an approximation threshold strictly greater than zero. Finally, we show that none of the MAXSNP-complete problems has a PTAS unless P = NP, which is the strongest result that we could expect to hold.

3.1 L-Reductions

We observe that many-one polynomial-time reductions do not preserve the approximation ratios of approximation algorithms. We therefore construct another type of polynomial-time reduction for optimization problems which preserves approximation ratios across the reduction and which is transitive, i.e., if problem A reduces to problem B and problem B reduces to problem C, then there exists a reduction from A to C with the same properties.

Definition 4. An L-reduction from an optimization problem A to an optimization problem B is a pair of functions (R, S), both computable in logarithmic space, such that, if x is an
instance of A, then R(x) is an instance of B, and if y is an answer (feasible solution) of R(x), then S(y) is an answer of x. Furthermore:

1. If x is an instance of A with optimum cost OPT(x), then R(x) is an instance of B with optimum cost satisfying OPT(R(x)) <= alpha OPT(x), where alpha is a positive constant.
2. If s is any feasible solution of R(x), then S(s) is a feasible solution of x such that
   |OPT(x) - c(S(s))| <= beta |OPT(R(x)) - c(s)|,
   where beta is another positive constant particular to the reduction. In both cases c(.) denotes the cost of feasible solutions.

Observe from the second property that if s is an optimum solution of R(x), then S(s) is an optimum solution of x. The following two propositions are an easy consequence of the definition.

Proposition 5. If (R, S) is an L-reduction from problem A to problem B and (R', S') is an L-reduction from B to C, then their composition (R' o R, S o S') is an L-reduction from A to C.

Proposition 6. If there is an L-reduction (R, S) from A to B with constants alpha and beta, and there is a polynomial-time epsilon-approximation algorithm for B, then there is a polynomial-time alpha-beta-epsilon-approximation algorithm for A.

The last proposition gives rise to the following corollary.

Corollary 7. If there is an L-reduction from A to B and there is a PTAS for B, then there is a PTAS for A.

The proofs are left as an exercise and are also available in [2].

3.2 The Class MAXSNP

Fagin's Theorem from finite model theory [2] provides a structural characterization of the class NP: all graph-theoretic properties in NP can be expressed in existential second-order logic. Strict NP, or SNP, is the subset of NP consisting of properties expressible as

  exists S, for all x_1, ..., x_k: phi(S, G, x_1, ..., x_k),

where phi is a quantifier-free first-order expression involving the variables x_i and the structures G (the input) and S. SNP contains decision problems; however, we are interested in defining a class of optimization problems. We can achieve this by slightly modifying the definition. SNP consists of structures G for which

  exists S, for all x_1, x_2, ..., x_k: phi
is true. Now we can relax this requirement a little bit: instead of requiring phi to hold for all k-tuples, we only ask for the S that makes it hold for the largest possible number of tuples. This immediately gives us an optimization problem. We now define the class MAXSNP0 of optimization problems. A problem A in this class is defined in terms of the expression

  max over S of |{ (x_1, ..., x_k) in V^k : phi(G_1, ..., G_m, S, x_1, ..., x_k) }|.

The input to a problem A is a set of relations G_1, ..., G_m over a finite universe V. We seek a relation S contained in V^r such that the number of k-tuples (x_1, ..., x_k) for which phi holds is as large as possible. Finally, we define the class MAXSNP to be the class of all optimization problems that are L-reducible to a problem in MAXSNP0.

Example 1. The problem MAXCUT is in MAXSNP0 and therefore in MAXSNP, because MAXCUT can be written as

  max over S contained in V of |{ (x, y) : G(x, y) and ((S(x) and not S(y)) or (not S(x) and S(y))) }|.

Here the graph is represented as a directed graph. The problem asks for the subset S of nodes that maximizes the number of edges that enter S or leave S, that is, the maximum cut in the underlying directed graph.

The following theorem shows that every problem in MAXSNP has a nonzero approximation threshold.

Theorem 8. Let A be a problem in MAXSNP0 of the form max over S of |{(x_1, x_2, ..., x_k) : phi}|. Then A has a (1 - 2^(-k'))-approximation algorithm, where k' is the number of atomic expressions in phi that involve S.

Sketch of proof. The proof consists of two parts. In the first part we use the structural characterization of each problem in MAXSNP0 to come up with a corresponding MAX k-FUNCTION SAT instance. In the second part we provide an approximation algorithm for MAX k-FUNCTION SAT.

Consider an instance of A; we wish to find the S that maximizes the number of tuples that evaluate to true. For the MAX k-FUNCTION SAT instance we construct a formula for every possible tuple. Since k is a constant, there are at most polynomially many formulae. For each tuple, given that G and the tuple are fixed, the only variables left in phi
are of the form S(a), where S(.) acts as the indicator function for S. Let k' be the number of times S appears in phi. We thus have a MAX k'-FUNCTION SAT instance, and the optimum assignment to this instance is clearly the indicator function of the optimum S for the instance of A in question.

Now, given a MAX k'-FUNCTION SAT instance, here is a (1 - 2^(-k'))-approximation algorithm. Partition the variables into sets of size k'. For each set in the partition, consider the functions that can be evaluated by an assignment to the corresponding k' variables. By the pigeonhole principle, there is an assignment to these k' variables that makes at least a 2^(-k') fraction of these functions evaluate to true; find such an assignment by going over the 2^(k') possible assignments to these variables. This can be done in constant time since k' is a constant. Since at each step we satisfy at least a 2^(-k') fraction of the functions, the algorithm has approximation ratio 2^(-k') and is therefore a (1 - 2^(-k'))-approximation algorithm. The result follows.

3.2.1 MAXSNP-Completeness

Definition 9. A problem in MAXSNP is MAXSNP-complete if all problems in MAXSNP L-reduce to it.

Theorem 10. MAX 3SAT is MAXSNP-complete.

Proof idea. The proof uses the construction in Theorem 8 to first obtain an instance of MAX k-FUNCTION SAT for some k; this is then L-reduced to a 3-CNF SAT instance. [2] has the complete proof.

[2] proves that several important problems are MAXSNP-complete; these include MAXCUT, 4-DEGREE INDEPENDENT SET, and 4-DEGREE NODE COVER, among others. Next we prove that no MAXSNP-complete problem has a PTAS unless P = NP. This uses the inapproximability result for MAX k-FUNCTION SAT, which was proved in Section 2 using the PCP Theorem.

Theorem 11. No MAXSNP-complete problem has a PTAS unless P = NP.

Proof. We show this result by exhibiting a problem in MAXSNP that does not have a PTAS. This immediately implies the required result, because if a MAXSNP-complete problem had a PTAS, then every problem in MAXSNP would have a PTAS. Recall that Lemma 3 proved that there exists
a constant k for which MAX k-FUNCTION SAT has no 1/2-factor approximation algorithm, and hence no PTAS, unless P = NP. Now, for every constant k we can construct a problem in MAXSNP, using the structural characterization of MAXSNP0, which has number of atomic expressions equal to k. By the construction in Theorem 8 we can reduce this problem to MAX k-FUNCTION SAT, and Lemma 3 applies to at least one such problem. This proves the result.

References

[1] V. V. Vazirani. Approximation Algorithms. Springer, 2001.
[2] C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1993.
[3] S. Arora and B. Barak. Complexity Theory: A Modern Approach. Cambridge University Press, expected in 2009. A draft is available at http://www.cs.princeton.edu/theory/complexity/.


Jörg's Graphics Lecture Notes: Particles & Shape Grammars

Kinematics

Kinematics considers motion (trajectories) only, without regard to physical considerations such as forces, friction, inertia, etc., as opposed to dynamics. With angle theta and position p:

  p = F(theta)        (forward kinematic equation)
  theta = F^-1(p)     (inverse kinematic equation)

We need the set of angles theta for animation. Inverse kinematics is difficult, since F^-1 may not be well-defined: there may be several ways, or no way, to achieve position p. Alternative: key-frame animation (in-betweening, or interpolation).

Newtonian Particles

Position p and velocity v form the state Y := (p, v) in R^6 as a function of time t:

  Y' = (p', v') = (v, f/m) =: g(Y, t).

To determine the state at time t + h from the state at time t we need to solve this ODE. Examples of the force f:

1. gravitational force: f = (0, -m g, 0);
2. spring force (harmonic oscillator) between p and q: define d := p - q and d := ||d||; Hooke's law with damping gives
     f = -( k_spring (d - d_0) + k_damp (d'. d / d) ) d / d;
3. repulsive force: f = k_repulse d / d^3; avoid the O(n^2) cost of evaluating all pairs.

Solving the ODE. Euler's method should not be used, but it explains the basic forward-solving idea:

  Y(t + h) = Y(t) + h Y'(t) + O(h^2) = Y(t) + h g(Y(t), t) + O(h^2).

For stiff ODEs: Runge-Kutta solvers, Verlet integration, implicit Euler:

  Y(t + h) = Y(t) + h g(Y(t + h), t + h),  e.g.  p(t + h) = p(t) + h p'(t + h).
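The difference between the explicit update and the stabler velocity-first (semi-implicit) update can be sketched for one particle under gravity (a minimal illustration of my own; the constants and function names are assumptions, not from the notes):

```python
# State Y = (p, v); Y' = g(Y, t) = (v, f/m). One coordinate, gravity only.
G = -9.81  # gravitational acceleration along the y-axis, m/s^2

def explicit_euler(p, v, h):
    # Y(t+h) = Y(t) + h*g(Y(t), t): both updates use the OLD state.
    return p + h * v, v + h * G

def semi_implicit_euler(p, v, h):
    # Update the velocity first, then move with the NEW velocity;
    # this is the cheap stand-in for implicit Euler used in practice.
    v_new = v + h * G
    return p + h * v_new, v_new

p, v = 10.0, 0.0
for _ in range(100):
    p, v = semi_implicit_euler(p, v, 0.01)
print(p, v)  # particle dropped from height 10 after one second
```

A full implicit Euler step would instead solve for Y(t + h) on both sides of the update, which requires a (possibly nonlinear) solve per step.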
Constraints: soft (approximate) and hard (e.g., collision).

- Detection is difficult: bounding boxes, quadtrees, binary space partitioning.
- Reaction: let dp := p(t) - p(t*), where t is the previous time and t* the time of intersection, and let n be the normal at p(t*). Then the new point is in the direction dp - 2 (dp . n) n. This corresponds to an elastic collision (no energy loss, no deformation).
- Soft constraints: with a penalty function, minimize ||p - p_0||.

Examples: bounce; a sphere bouncing, intersecting with a sphere of radius r (the worked formulas for the intersection point and normal are garbled in this copy).

Language-Based Models (tree, shape grammar)

Example: Koch curve (snowflake): F -> F L F R R F L F. There exists a unit circle such that the Koch curve inside it has infinite length. Example: F -> F R F F L F F (random); see OPENGL/LRfrac. Space-filling Hilbert curve. Fractals: self-similar structures.

[The next page of these notes is badly garbled in this copy. The recoverable topics are: fractal mountains generated by Brownian random midpoint displacement, with a displacement variance that shrinks with the subdivision level; and the Mandelbrot and Julia sets, obtained by iterating z <- z^2 + c for c in C and asking for which values the iteration remains bounded.]


Jörg's Graphics Lecture Notes: Discrete Quadratic Curvature Energies

Motivation: thin-shell surface bending and stretching, as in cloth dynamics (also think of skin, paper, etc.); also "fairing" (Willmore energy).

Dynamics. For a surface at position x(u, v, t) in R^3 moving in a velocity field v(u, v, t) in R^3, a mass matrix M, the Hessian with respect to x denoted Hess, and energies E_b, E_p in R, Newton's formula for moving the surface S with position x subject to the forces F_E and F_D is

  x(t) = x(0) + integral from 0 to t of v,
  v(t) = v(0) + integral from 0 to t of M^-1 [ F_E(x) + F_D(x, v) ],   (1)
  F_D = -( a_1 M + a_2 Hess E_b + a_3 Hess E_p ) v,                    (2)
  F_E = -grad E_b - grad E_p.                                          (3)

Here the subscript D stands for damping, E for energy, b for bending, and p for stretching. The
decisive ingredient in the motion of S is the definition of the bending energy E_b. The equations may be solved by semi-implicit Euler steps.

Energies. The stretching energy E_p penalizes stretching of the surface. This is captured by looking at the first fundamental form(1) I. Since E_p is assumed in the model to be at least two orders of magnitude larger than E_b, we may assume that the deformation minimizes E_p and is therefore almost isometric (I does not change). We choose to model bending as an integral of the mean curvature H rather than the Gauss curvature K (whose integral is determined by the Gauss-Bonnet formula): H = (H_1 + H_2)/2, where H_1 and H_2 are the principal curvatures.

  E_b = integral over S of H^2 dA,   H = (H_1 + H_2)/2,            (4)
      = 1/4 integral over S of (Lap_x x) . (Lap_x x) dA.           (5)

Here Lap_x is the Laplace-Beltrami operator, i.e., Lap = div grad, where the differentiation (gradient) is intrinsic to the surface rather than taken in the ambient space R^3. If I is constant, E_b is quadratic in x.

Discretization. The solution of the Dirichlet problem

  Lap z = 0,   z = z_0 on the boundary of S,

where z could be one coordinate of x, is a minimizer of the Dirichlet energy

  D(z) := integral over S of grad z . grad z dA.                   (6)

D is (D1) nonnegative, (D2) zero iff z = const, and (D3) scale-invariant if z is bivariate: the area A scales by lambda^2 and grad by lambda^-1 if z is scaled by lambda.

(1) Web-search notions you do not know.

We discretize D and z with k samples in each parameter by the quadratic form L in R^(k x k) (L for Laplace), as

  D(z) ~ d(u, u) := u^T L u,                                       (7)

where u in R^k is the vector of samples of z. To match D1-D3, L should be (L1) symmetric positive semidefinite, (L2) vanish exactly when u is constant, and (L3) invariant under uniform scaling. Note that d(u + e, u + e) = d(u, u) + d(e, e) + 2 d(u, e), and hence for d(e, e) = 0, (L2), d(u, e) = 0 must hold, since otherwise d(u + a e, u + a e) = d(u, u) + 2 a d(u, e) < 0 for suitably chosen a in R, contradicting nonnegativity of d. We add (L4): L x = 0 if x is part of a plane.

The solution of the Poisson problem

  Lap z = f,   z = 0 on the boundary of S,

minimizes the continuous analogue of

  d(u, v) := u^T L v = f^T M v,                                    (8)

with the mass matrix M providing a scaling of the inner product of f and v.
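One standard concrete choice of L and M that meets the requirements L1-L2 is the cotangent-weight Laplacian with a lumped (diagonal) area mass matrix on a triangle mesh, anticipating the finite-element formulas and the cotangent identity discussed next (a hedged sketch of my own; the mesh, helper functions, and names are assumptions, not from the notes):

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return sum(x*y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))

def cotan_laplacian(verts, tris):
    """Cotangent-weight Laplace matrix L and lumped diagonal mass M for
    a triangle mesh: each triangle adds -cot(angle)/2 to the off-diagonal
    entry of the opposite edge; diagonals make every row sum to zero
    (so L is symmetric and constants are in its kernel: L1, L2)."""
    n = len(verts)
    L = [[0.0]*n for _ in range(n)]
    M = [0.0]*n
    for tri in tris:
        for k in range(3):
            i, j, o = tri[k], tri[(k+1) % 3], tri[(k+2) % 3]
            u, v = sub(verts[i], verts[o]), sub(verts[j], verts[o])
            cot = dot(u, v) / norm(cross(u, v))  # cot of the angle at o
            L[i][j] -= cot/2; L[j][i] -= cot/2
            L[i][i] += cot/2; L[j][j] += cot/2
        a, b, c = (verts[t] for t in tri)
        area = 0.5 * norm(cross(sub(b, a), sub(c, a)))
        for t in tri:
            M[t] += area/3  # lump one third of each triangle's area

    return L, M

# Unit square in the plane z = 0, split into two triangles.
verts = [(0.0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
L, M = cotan_laplacian(verts, [(0, 1, 2), (0, 2, 3)])
```

Since the mesh is planar, the linear coordinate functions are also annihilated by L up to boundary terms, which is the discrete counterpart of requirement L4.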
M should then be (M1) symmetric positive definite, (M2) scale by lambda^2, and (M3) satisfy M 1 = A 1 (applied to the constant-one vector, M yields the area weights). Then M^-1 L satisfies L1-L2 and scales like the bivariate Laplace-Beltrami operator.

Finite Elements. For a basis (phi_m),

  L_mn = integral over S of grad phi_m . grad phi_n dA,   M_mn = integral over S of phi_m phi_n dA.   (9)

Linear basis elements can, for example, be hat functions (vertex-based Lagrange functions) or Crouzeix-Raviart functions (mid-edge connected, nonconforming).

Exercise: check that M3 holds for hat functions, i.e. (cryptically; explained in class) sum over n of M_mn = integral of phi_m dA; on a single triangle of area A the diagonal entry is A/6, the off-diagonal entries are A/12, and the integral of phi_m is A/3. Check for the triangle (0, m, n) with opening angle alpha at 0, and the neighboring triangle across edge (m, n) with angle beta, that

  L_mn = -( cot alpha + cot beta ) / 2.                             (10)

(Equation (11), which abbreviates cot alpha = cos alpha / sin alpha in terms of the triangle's edge vectors, is garbled in this copy.)

Boundary Value Problems

A second-order boundary value problem has the form

  epsilon^2 y''(t) + a y(t) = b,   t in (0, T) subset of R,         (BVP)

together with boundary conditions. Example (deflecting beam): existence and uniqueness depend delicately on the data. The (partly garbled) examples here list one choice of boundary conditions with a unique solution y = sin, one with no solution for a < 0, and one with periodic-type conditions that has the two solutions y_1 = sin and y_2 = cos.

Numerical Solution. Example (see also de Boor's Spline Toolbox). We want to approximately solve the following nonlinear stiff boundary value problem in one variable:

  epsilon^2 y'' + y^2 = 1 on (0, 1),   y'(0) = y(1) = 0.

(Picture the solution for smaller and smaller epsilon.) To represent the solution we choose degree-3 C^1 splines on n = 4 equal-length pieces. We enforce collocation at two Gauss points per piece and use Newton iteration with starting guess y^0(t) = t^2 - 1.

The input consists of (1) the differential equation epsilon^2 y'' + y^2 = 1, (2) the domain (0, 1), and (3) the boundary conditions y'(0) = y(1) = 0. The algorithm specifies (4) the space of approximating functions, C^1 splines of degree 3 on n = 4 equal-length pieces, (5) the approximation criterion, collocation at two Gauss points per piece, and (6) the solution technique, Newton iteration with starting guess y^0.

We need to work out the details of the algorithm by determining the knot vector x, the vector of collocation points t, and the initial choice of spline coefficients a^0. A degree-3 C^1 spline has double knots, and we need a total of 10 B-splines, hence 14 knots. (Draw it!) In Matlab notation:

  h = 1/n; breakpts = 0:h:1;
  x = sort([0 0 breakpts breakpts
  1 1]);                           % 14 knots: double knots at the breakpoints
  mid = breakpts(1:n) + h/2;
  rho = h/(2*sqrt(3));             % offset of the two Gauss points per piece
  t = sort([mid-rho, mid+rho]);    % 2n collocation points t_1 < ... < t_2n
  % a0: spline coefficients fitting y0(t) = t^2 - 1
  % (the exact toolbox call is garbled in this copy).

Then we derive a discretization of the problem:

  (Dy)(0) = 0,
  epsilon^2 (D^2 y)(t_j) + y(t_j)^2 - 1 = 0,   j = 1, ..., 2n,
  y(1) = 0.

The linearization of the problem for Newton's method is, for j = 1, ..., 2n:

  (Dy^(m+1))(0) = 0,
  2 y^m(t_j) y^(m+1)(t_j) + epsilon^2 (D^2 y^(m+1))(t_j) = y^m(t_j)^2 + 1,
  y^(m+1)(1) = 0.

This is a 10-by-10 system of equations, which we bring into matrix form. At iteration m the right-hand side is

  b^m = ( 0, y^m(t_1)^2 + 1, ..., y^m(t_2n)^2 + 1, 0 )^T.

We still need 2n + 2 matrices M(x, t_j), each of size 3 by 2n + 2, that determine y(t_j), (Dy)(t_j), and (D^2 y)(t_j) from the coefficients a_k, k = 1, ..., 10. The application of each M(x, t_j) forms row j of the matrix W^m, so W^m is a (2n + 2)-by-(2n + 2) matrix. Then each Newton step solves

  W^m a^(m+1) = b^m,

and the iteration stops when the max-norm of a^(m+1) - a^m is sufficiently small. Starting with epsilon = 0.1, a decreased epsilon requires additional control in the region close to 1.
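The same Newton iteration can be sketched in Python with second-order finite differences standing in for the spline collocation of the notes (an illustration of my own; the grid size, tolerance, and helper names such as solve_bvp and gauss_solve are assumptions):

```python
def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_bvp(eps=0.1, n=50, tol=1e-10, max_iter=50):
    """Newton iteration for eps^2*y'' + y^2 = 1, y'(0) = 0, y(1) = 0,
    discretized with second-order finite differences on n+1 grid points."""
    h = 1.0 / n
    t = [i * h for i in range(n + 1)]
    y = [ti**2 - 1 for ti in t]          # starting guess y0(t) = t^2 - 1
    for _ in range(max_iter):
        # Assemble the Newton system J*delta = -F.
        A = [[0.0] * (n + 1) for _ in range(n + 1)]
        F = [0.0] * (n + 1)
        # y'(0) = 0 via a one-sided second-order difference
        A[0][0], A[0][1], A[0][2] = -3/(2*h), 2/h, -1/(2*h)
        F[0] = (-3*y[0] + 4*y[1] - y[2]) / (2*h)
        c = eps**2 / h**2
        for i in range(1, n):
            A[i][i-1], A[i][i], A[i][i+1] = c, -2*c + 2*y[i], c
            F[i] = c * (y[i-1] - 2*y[i] + y[i+1]) + y[i]**2 - 1
        A[n][n] = 1.0                    # y(1) = 0
        F[n] = y[n]
        delta = gauss_solve(A, [-f for f in F])
        y = [yi + di for yi, di in zip(y, delta)]
        if max(abs(d) for d in delta) < tol:
            break
    return t, y

t, y = solve_bvp()
```

As the notes predict, the computed solution sits near the outer solution y = -1 over most of (0, 1), with a boundary layer of width on the order of epsilon near t = 1 where it climbs to y(1) = 0; shrinking epsilon sharpens the layer and demands finer resolution there.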