Tuesday, January 28, 2020

Comparison Of Rate Of Convergence Of Iterative Methods Philosophy Essay

The term iterative method refers to a wide range of techniques that use successive approximations to obtain progressively more accurate solutions to a linear system at each step. In numerical analysis, an iterative method attempts to solve a problem by finding successive approximations to the solution, starting from an initial guess. This approach is in contrast to direct methods, which attempt to solve the problem by a finite sequence of operations and, in the absence of rounding errors, would deliver an exact solution. Iterative methods are usually the only choice for nonlinear equations. However, they are often useful even for linear problems involving a large number of variables (sometimes of the order of millions), where direct methods would be prohibitively expensive, and in some cases impossible, even with the best available computing power.

Stationary methods are older and simpler to understand and implement, but usually not as effective. A stationary iterative method performs the same operations on the current iteration vectors in every iteration: it solves a linear system with an operator approximating the original one and, based on a measurement of the error in the result, forms a correction equation, and this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is guaranteed only for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method, the Gauss-Seidel method, and the successive overrelaxation (SOR) method.

Nonstationary methods are based on the idea of sequences of orthogonal vectors and have iteration-dependent coefficients. They are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective.

Two terms recur throughout. A dense matrix is one for which the number of zero elements is too small to warrant specialized algorithms. A sparse matrix is one for which the number of zero elements is large enough that algorithms avoiding operations on zero elements pay off. Matrices derived from partial differential equations typically have a number of nonzero elements that is proportional to the matrix size, while the total number of matrix elements is the square of the matrix size.

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. This transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method sufficiently to overcome the extra cost of constructing and applying it; indeed, without a preconditioner the iterative method may even fail to converge.

Rate of Convergence

In numerical analysis, the speed at which a convergent sequence approaches its limit is called the rate of convergence. Although, strictly speaking, a limit gives no information about any finite first part of the sequence, the concept is of practical importance when dealing with the successive approximations produced by an iterative method: typically, fewer iterations are needed to yield a useful approximation when the rate of convergence is higher. This may even make the difference between needing ten iterations or a million. Similar concepts are used for discretization methods: the solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors determining the efficiency of the method, although the terminology in that setting differs from the terminology for iterative methods.

The rate of convergence of an iterative method is denoted by μ and is defined as follows. Suppose the sequence {x_n} (generated by an iterative method to find an approximation to a fixed point) converges to a point x. Then

    lim_{n→∞} |x_{n+1} − x| / |x_n − x|^α = μ,

where μ ≥ 0 and α is the order of convergence. In cases where α = 2 or α = 3 the sequence is said to have quadratic or cubic convergence, respectively. In the linear case, α = 1, the sequence converges only if μ lies in the interval (0, 1): for the bound E_{n+1} ≤ μ E_n to guarantee convergence, the absolute errors must decrease with each approximation, and to ensure this we must have 0 < μ < 1. If α = 1 and μ = 1 and the sequence is known to converge (μ = 1 by itself does not tell us whether it converges or diverges), then {x_n} is said to converge sublinearly, i.e. its order of convergence is less than one. If μ > 1 the sequence diverges. If μ = 0 the sequence converges superlinearly, i.e. its order of convergence is higher than 1; in that case one increases α to find the actual order of convergence. Since μ is a limit of ratios of absolute values, it is never negative.
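To make these definitions concrete, the following rough Python sketch estimates the order α and the rate μ from successive iterates via the standard approximations α ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}) and μ ≈ e_{n+1}/e_n^α, where e_n = |x_n − x|. The test problem (Newton's method for √2) is just an assumed example; it should report α close to 2, i.e. quadratic convergence.

    import math

    # Estimate the order alpha and rate mu of a convergent sequence x_n -> x
    # from successive errors e_n = |x_n - x|:
    #   alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}),   mu ~ e_{n+1} / e_n**alpha
    def convergence_estimates(xs, x_star):
        errs = [abs(x - x_star) for x in xs]
        out = []
        for n in range(1, len(errs) - 1):
            alpha = math.log(errs[n + 1] / errs[n]) / math.log(errs[n] / errs[n - 1])
            mu = errs[n + 1] / errs[n] ** alpha
            out.append((alpha, mu))
        return out

    # Assumed test problem: Newton's method for x^2 = 2, which should show
    # quadratic convergence (alpha close to 2).
    xs = [1.0]
    for _ in range(4):
        x = xs[-1]
        xs.append(x - (x * x - 2.0) / (2.0 * x))

    for alpha, mu in convergence_estimates(xs, math.sqrt(2.0)):
        print(f"alpha ~ {alpha:.2f}, mu ~ {mu:.3f}")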
Stationary iterative methods

Stationary iterative methods solve a linear system of equations

    Ax = b,

where A is a given matrix and b is a given vector. They can be expressed in the simple form

    x^(k+1) = B x^(k) + c,

where neither B nor c depends upon the iteration count k. The four main stationary methods are the Jacobi method, the Gauss-Seidel method, the successive overrelaxation method (SOR), and the symmetric successive overrelaxation method (SSOR).

1. Jacobi method:-

The Jacobi method is based on solving for every variable locally with respect to the other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow. The Jacobi method solves a matrix equation for a matrix that has no zeros along its main diagonal: each diagonal element is solved for, an approximate value is plugged in, and the process is iterated until it converges. The algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization.

The Jacobi method is easily derived by examining each of the n equations of the linear system Ax = b in isolation. In the i-th equation, solve for the value of x_i while assuming the other entries of x remain fixed. This gives

    x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii,

which is the Jacobi method. In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. In matrix terms the method can be written as

    x^(k+1) = D^(−1) (b − (L + U) x^(k)),

where the matrices D, L, and U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.

Convergence:- The standard convergence condition (for any iterative method of this form) is that the spectral radius of the iteration matrix be less than 1; for the Jacobi method the iteration matrix is D^(−1)R, where D is the diagonal part of A and R = L + U is the remainder. The method is guaranteed to converge if the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of the absolute values of the other terms:

    |a_ii| > Σ_{j≠i} |a_ij|.

The Jacobi method sometimes converges even if these conditions are not satisfied.
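A minimal NumPy sketch of the element-wise Jacobi update described above; the small strictly diagonally dominant system at the bottom is only an assumed test case.

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
        """Jacobi iteration: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
        D = np.diag(A)                  # diagonal entries a_ii
        R = A - np.diagflat(D)          # remainder L + U (off-diagonal part)
        for k in range(max_iter):
            x_new = (b - R @ x) / D     # every component uses the old iterate
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter

    # Assumed strictly diagonally dominant test system.
    A = np.array([[10.0, -1.0, 2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    x, iters = jacobi(A, b)
    print(x, "in", iters, "iterations")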
2. Gauss-Seidel method:-

The Gauss-Seidel method is like the Jacobi method, except that it uses updated values as soon as they are available. In general, if the Jacobi method converges, the Gauss-Seidel method will converge faster, though still relatively slowly. The Gauss-Seidel method is a technique for solving the n equations of the linear system Ax = b one at a time in sequence, using previously computed results as soon as they are available.

Two important characteristics of the Gauss-Seidel method should be noted. First, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Second, the new iterate depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterate (and not just their order) will also change. In terms of matrices, the Gauss-Seidel method can be expressed as

    x^(k+1) = (D + L)^(−1) (b − U x^(k)),

where the matrices D, L, and U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. The Gauss-Seidel method is applicable to strictly diagonally dominant or symmetric positive definite matrices A.

Convergence:- Given a square system of n linear equations with unknown x, the convergence properties of the Gauss-Seidel method depend on the matrix A. Namely, the procedure is known to converge if either A is symmetric positive definite, or A is strictly or irreducibly diagonally dominant. The Gauss-Seidel method sometimes converges even if these conditions are not satisfied.
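For comparison with the Jacobi sketch, here is the same kind of minimal implementation for Gauss-Seidel; the only real change is that each updated component is used immediately within the sweep. The test system is the same assumed example, on which Gauss-Seidel typically needs noticeably fewer iterations.

    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
        """Gauss-Seidel: like Jacobi, but updated components are used immediately."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # j < i uses the new values, j > i still uses the old ones
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k + 1
        return x, max_iter

    # Same assumed test system as in the Jacobi sketch.
    A = np.array([[10.0, -1.0, 2.0],
                  [-1.0, 11.0, -1.0],
                  [ 2.0, -1.0, 10.0]])
    b = np.array([6.0, 25.0, -11.0])
    x, iters = gauss_seidel(A, b)
    print(x, "in", iters, "iterations")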
3. Successive overrelaxation method:-

The successive overrelaxation method (SOR) is a method for solving a linear system of equations Ax = b derived by extrapolating the Gauss-Seidel method. The extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate, applied successively to each component:

    x_i^(k+1) = (1 − ω) x_i^(k) + ω x̄_i^(k+1),

where x̄_i^(k+1) denotes the Gauss-Seidel iterate and ω is the extrapolation (relaxation) factor. The idea is to choose a value of ω that accelerates the rate of convergence of the iterates to the solution. In matrix terms the SOR algorithm can be written as

    x^(k+1) = (D + ωL)^(−1) ( ωb − (ωU + (ω − 1)D) x^(k) ),

where the matrices D, L, and U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. If ω = 1, the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan shows that SOR fails to converge if ω is outside the interval (0, 2). In general, it is not possible to compute in advance the value of ω that will maximize the rate of convergence of SOR. Frequently, some heuristic estimate is used, such as ω = 2 − O(h), where h is the mesh spacing of the discretization of the underlying physical domain.

Convergence:- SOR may converge faster than Gauss-Seidel by an order of magnitude. We seek the solution of the set of linear equations Ax = b. In matrix terms the SOR iteration is the expression above, where D, L, and U represent the diagonal, lower triangular, and upper triangular parts of the coefficient matrix A, k is the iteration count, and ω is the relaxation factor. This matrix expression is not usually used to program the method; an element-based expression is used instead. Note that for ω = 1 the iteration reduces to the Gauss-Seidel iteration. As with the Gauss-Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration fall below some tolerance. The choice of relaxation factor is not necessarily easy and depends upon the properties of the coefficient matrix. For symmetric positive definite matrices it can be proven that 0 < ω < 2 leads to convergence, but we are generally interested in faster convergence rather than just convergence.

4. Symmetric successive overrelaxation method:-

Symmetric successive overrelaxation (SSOR) has no advantage over SOR as a stand-alone iterative method; however, it is useful as a preconditioner for nonstationary methods. The SSOR method combines two SOR sweeps in such a way that the resulting iteration matrix is similar to a symmetric matrix in the case that the coefficient matrix A of the linear system Ax = b is symmetric. SSOR consists of a forward SOR sweep followed by a backward SOR sweep in which the unknowns are updated in reverse order. The similarity of the SSOR iteration matrix to a symmetric matrix permits the application of SSOR as a preconditioner for other iterative schemes for symmetric matrices. This is the primary motivation for SSOR, since its convergence rate is usually slower than the convergence rate of SOR with optimal ω.
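The sketch below adds the relaxation factor ω to the Gauss-Seidel sweep, giving SOR; with ω = 1 it reproduces Gauss-Seidel exactly. The small symmetric positive definite test matrix and the trial values of ω are assumed examples, intended only to show how the iteration count depends on the choice of ω.

    import numpy as np

    def sor(A, b, omega, x0=None, tol=1e-10, max_iter=1000):
        """SOR: weighted average of the previous iterate and the Gauss-Seidel update.
        omega = 1 reproduces Gauss-Seidel; convergence requires 0 < omega < 2."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                gs_i = (b[i] - s) / A[i, i]              # Gauss-Seidel value for x_i
                x[i] = (1.0 - omega) * x_old[i] + omega * gs_i
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k + 1
        return x, max_iter

    # Assumed small symmetric positive definite test system.
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([2.0, 4.0, 10.0])
    for omega in (1.0, 1.1, 1.25):
        x, iters = sor(A, b, omega)
        print(f"omega={omega}: {iters} iterations")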
Non-stationary iterative methods:-

1. Conjugate Gradient method (CG):-

The conjugate gradient method derives its name from the fact that it generates a sequence of conjugate (or orthogonal) vectors. These vectors are the residuals of the iterates; they are also the gradients of a quadratic functional whose minimization is equivalent to solving the linear system. CG is an extremely effective method when the coefficient matrix is symmetric positive definite, since storage for only a limited number of vectors is required.

Suppose we want to solve the system of linear equations Ax = b, where the n-by-n matrix A is symmetric (A^T = A), positive definite (x^T A x > 0 for all non-zero vectors x in R^n), and real. We denote the unique solution of this system by x*. We say that two non-zero vectors u and v are conjugate (with respect to A) if

    u^T A v = 0.

Since A is symmetric and positive definite, the left-hand side defines an inner product, so two vectors are conjugate if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u.

Convergence:- Accurate predictions of the convergence of iterative methods are difficult to make, but useful bounds can often be obtained. For the conjugate gradient method, the error can be bounded in terms of the spectral condition number κ of the matrix: if λ_max and λ_min are the largest and smallest eigenvalues of a symmetric positive definite matrix, then its spectral condition number is κ = λ_max / λ_min. If x* is the exact solution of the linear system Ax = b, with symmetric positive definite matrix A, then for CG with a symmetric positive definite preconditioner it can be shown that

    ||x^(k) − x*||_A ≤ 2 α^k ||x^(0) − x*||_A,   where α = (√κ − 1) / (√κ + 1),

κ is the spectral condition number of the preconditioned matrix, and ||y||_A^2 = y^T A y. From this relation we see that the number of iterations needed to reach a given relative reduction in the error is proportional to √κ.

In some cases, practical application of the above error bound is straightforward. For example, elliptic second-order partial differential equations typically give rise to coefficient matrices with κ = O(h^(−2)) (where h is the discretization mesh width), independent of the order of the finite elements or differences used and of the number of space dimensions of the problem. Thus, without preconditioning, we expect a number of iterations proportional to h^(−1) for the conjugate gradient method.

Other results concerning the behavior of the conjugate gradient algorithm have been obtained. If the extremal eigenvalues of the matrix are well separated, one often observes so-called superlinear convergence, that is, convergence at a rate that increases per iteration. This phenomenon is explained by the fact that CG tends to eliminate components of the error in the direction of eigenvectors associated with extremal eigenvalues first. After these have been eliminated, the method proceeds as if these eigenvalues did not exist in the given system, i.e., the convergence rate depends on a reduced system with a smaller condition number. The effectiveness of the preconditioner in reducing the condition number and in separating extremal eigenvalues can be deduced by studying the approximated eigenvalues of the related Lanczos process.
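A textbook-style sketch of unpreconditioned CG for a symmetric positive definite system. The tridiagonal test matrix (a 1-D Laplacian-like stencil) is an assumed example; the condition number is printed alongside the iteration count to connect the run loosely with the √κ bound above.

    import numpy as np

    def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
        """Unpreconditioned conjugate gradient for symmetric positive definite A."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
        r = b - A @ x                       # residual
        p = r.copy()                        # first search direction
        rs_old = r @ r
        for k in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)       # step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                return x, k + 1
            p = r + (rs_new / rs_old) * p   # next A-conjugate search direction
            rs_old = rs_new
        return x, max_iter

    # Assumed SPD test problem: 1-D Laplacian-like tridiagonal matrix.
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, iters = conjugate_gradient(A, b)
    kappa = np.linalg.cond(A)
    print(f"converged in {iters} iterations, condition number ~ {kappa:.1f}")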
2. Biconjugate Gradient method (BiCG):-

The biconjugate gradient method generates two CG-like sequences of vectors, one based on a system with the original coefficient matrix A, and one on A^T. Instead of orthogonalizing each sequence, they are made mutually orthogonal, or bi-orthogonal. This method, like CG, uses limited storage. It is useful when the matrix is nonsymmetric and nonsingular; however, convergence may be irregular, and there is a possibility that the method will break down. BiCG requires a multiplication with the coefficient matrix and with its transpose at each iteration.

Convergence:- Few theoretical results are known about the convergence of BiCG. For symmetric positive definite systems the method delivers the same results as CG, but at twice the cost per iteration. For nonsymmetric matrices it has been shown that, in phases of the process where there is significant reduction of the norm of the residual, the method is more or less comparable to full GMRES in terms of numbers of iterations. In practice this is often confirmed, but it is also observed that the convergence behavior may be quite irregular, and the method may even break down. The breakdown situation caused by an inner product in the recurrences becoming (nearly) zero can be circumvented by so-called look-ahead strategies, although this leads to complicated codes. The other breakdown situation occurs when the LU decomposition fails, and can be repaired by using another decomposition. Sometimes, breakdown or near-breakdown situations can be satisfactorily avoided by a restart at the iteration step immediately before the breakdown step. Another possibility is to switch to a more robust method, like GMRES.

3. Conjugate Gradient Squared method (CGS):-

The conjugate gradient squared method is a variant of BiCG that applies the updating operations for the A-sequence and the A^T-sequence both to the same vectors. Ideally, this would double the convergence rate, but in practice convergence may be much more irregular than for BiCG, which may sometimes lead to unreliable results. A practical advantage is that the method does not need multiplications with the transpose of the coefficient matrix. Often one observes a speed of convergence for CGS that is about twice as fast as for BiCG, which is in agreement with the observation that the same contraction operator is applied twice. However, there is no reason that the contraction operator, even if it really reduces the initial residual, should also reduce the once-reduced vector. This is evidenced by the often highly irregular convergence behavior of CGS. One should be aware that local corrections to the current solution may be so large that cancellation effects occur; this may lead to a less accurate solution than suggested by the updated residual. The method tends to diverge if the starting guess is close to the solution.

4. Biconjugate Gradient Stabilized method (Bi-CGSTAB):-

The biconjugate gradient stabilized method is a variant of BiCG, like CGS, but it uses different updates for the A^T-sequence in order to obtain smoother convergence than CGS. Bi-CGSTAB often converges about as fast as CGS, sometimes faster and sometimes not. CGS can be viewed as a method in which the BiCG contraction operator is applied twice; Bi-CGSTAB can be interpreted as the product of BiCG and repeatedly applied GMRES. At least locally, a residual vector is minimized, which leads to considerably smoother convergence behavior. On the other hand, if the local GMRES step stagnates, then the Krylov subspace is not expanded and Bi-CGSTAB will break down. This is a breakdown situation that can occur in addition to the other breakdown possibilities in the underlying BiCG algorithm. This type of breakdown may be avoided by combining BiCG with other methods, i.e., by selecting other values for the local minimization parameter. One such alternative is Bi-CGSTAB2; more general approaches are suggested by Sleijpen and Fokkema.
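Assuming SciPy is available, the sketch below runs BiCG, CGS, and Bi-CGSTAB on the same small nonsymmetric system and counts iterations through the solvers' callback hooks; the convection-diffusion-like test matrix and the counting approach are assumed for illustration. The point is simply that these methods can behave quite differently on the same problem.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import bicg, cgs, bicgstab

    # Assumed nonsymmetric test system: a 1-D convection-diffusion style matrix.
    n = 200
    A = diags([-1.3, 2.0, -0.7], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    def run(method, name):
        counter = {"iters": 0}
        def cb(xk):                        # called once per iteration
            counter["iters"] += 1
        x, info = method(A, b, callback=cb)
        resid = np.linalg.norm(b - A @ x)
        print(f"{name:10s} info={info} iterations={counter['iters']} residual={resid:.2e}")

    for method, name in [(bicg, "BiCG"), (cgs, "CGS"), (bicgstab, "Bi-CGSTAB")]:
        run(method, name)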
5. Chebyshev Iteration:-

The Chebyshev iteration recursively determines polynomials with coefficients chosen to minimize the norm of the residual in a min-max sense. The coefficient matrix must be positive definite, and knowledge of the extremal eigenvalues is required. The method has the advantage of requiring no inner products. Chebyshev iteration is another method for solving nonsymmetric problems; it avoids the computation of inner products, which is necessary for the other nonstationary methods. For some distributed-memory architectures these inner products are a bottleneck with respect to efficiency. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix that an ellipse enveloping the spectrum can be identified; however, this difficulty can be overcome via an adaptive construction developed by Manteuffel and implemented by Ashby. Chebyshev iteration is suitable for any nonsymmetric linear system for which the enveloping ellipse does not include the origin.

Convergence:- In the symmetric case (where the matrix and the preconditioner are both symmetric), the Chebyshev iteration has the same upper bound as the conjugate gradient method, provided the ellipse parameters are computed from the extremal eigenvalues of the preconditioned matrix. There is a severe penalty for overestimating or underestimating the field of values. For example, if in the symmetric case the largest eigenvalue is underestimated, the method may diverge; if it is overestimated, the result may be very slow convergence. Similar statements can be made for the nonsymmetric case. This implies that one needs fairly accurate bounds on the spectrum for the method to be effective (in comparison with CG or GMRES).

Acceleration of convergence

Many methods exist to increase the rate of convergence of a given sequence, i.e. to transform a given sequence into one converging faster to the same limit. Such techniques are generally known as series acceleration. The goal is for the transformed sequence to be much less expensive to calculate than continuing the original sequence. One example of series acceleration is Aitken's delta-squared process.
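As a small illustration of series acceleration, the sketch below applies Aitken's delta-squared process to a linearly convergent fixed-point iteration (x_{n+1} = cos x_n, an assumed example); the accelerated terms approach the same limit with markedly smaller errors.

    import math

    def aitken_delta_squared(seq):
        """Aitken's delta-squared process: from x_n, x_{n+1}, x_{n+2} form
        a_n = x_n - (x_{n+1} - x_n)**2 / (x_{n+2} - 2*x_{n+1} + x_n),
        which usually converges to the same limit faster than x_n."""
        accel = []
        for n in range(len(seq) - 2):
            denom = seq[n + 2] - 2.0 * seq[n + 1] + seq[n]
            if denom == 0.0:               # sequence already (numerically) converged
                break
            accel.append(seq[n] - (seq[n + 1] - seq[n]) ** 2 / denom)
        return accel

    # Assumed linearly convergent sequence: the fixed-point iteration
    # x_{n+1} = cos(x_n), which converges to about 0.739085.
    xs = [1.0]
    for _ in range(10):
        xs.append(math.cos(xs[-1]))

    limit = 0.7390851332151607             # fixed point of cos(x)
    for n, (x, a) in enumerate(zip(xs, aitken_delta_squared(xs))):
        print(f"n={n}: error {abs(x - limit):.2e} -> accelerated error {abs(a - limit):.2e}")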

Monday, January 20, 2020

Carl Jung Essay

Carl Jung 1875 - 1961

Anyone who wants to know the human psyche will learn next to nothing from experimental psychology. He would be better advised to abandon exact science, put away his scholar's gown, bid farewell to his study, and wander with human heart through the world. There in the horrors of prisons, lunatic asylums and hospitals, in drab suburban pubs, in brothels and gambling-hells, in the salons of the elegant, the Stock Exchanges, socialist meetings, churches, revivalist gatherings and ecstatic sects, through love and hate, through the experience of passion in every form in his own body, he would reap richer stores of knowledge than text-books a foot thick could give him, and he will know how to doctor the sick with a real knowledge of the human soul. -- Carl Jung

Freud said that the goal of therapy was to make the unconscious conscious. He certainly made that the goal of his work as a theorist. And yet he makes the unconscious sound very unpleasant, to say the least: it is a cauldron of seething desires, a bottomless pit of perverse and incestuous cravings, a burial ground for frightening experiences which nevertheless come back to haunt us. Frankly, it doesn't sound like anything I'd like to make conscious!

A younger colleague of his, Carl Jung, was to make the exploration of this "inner space" his life's work. He went equipped with a background in Freudian theory, of course, and with an apparently inexhaustible knowledge of mythology, religion, and philosophy. Jung was especially knowledgeable in the symbolism of complex mystical traditions such as Gnosticism, alchemy, Kabbalah, and similar traditions in Hinduism and Buddhism. If anyone could make sense of the unconscious and its habit of revealing itself only in symbolic form, it would be Carl Jung.

He had, in addition, a capacity for very lucid dreaming and occasional visions. In the fall of 1913, he had a vision of a "monstrous flood" engulfing most of Europe and lapping at the mountains of his native Switzerland. He saw thousands of people drowning and civilization crumbling. Then, the waters turned into blood. This vision was followed, in the next few weeks, by dreams of eternal winters and rivers of blood. He was afraid that he was becoming psychotic. But on August 1, 1914, World War I began. Jung felt that there had been a connection, somehow, between hims...

...ates, they are a little dangerous, especially economically. They are good at analysis and make good entrepreneurs. They do tend to play at oneupmanship.

ESFJ (Extroverted feeling with sensing): These people like harmony. They tend to have strong shoulds and should-nots. They may be dependent, first on parents and later on spouses. They wear their hearts on their sleeves and excel in service occupations involving personal contact.

ESFP (Extroverted sensing with feeling): Very generous and impulsive, they have a low tolerance for anxiety. They make good performers, they like public relations, and they love the phone. They should avoid scholarly pursuits, especially science.

ESTJ (Extroverted thinking with sensing): These are responsible mates and parents and are loyal to the workplace. They are realistic, down-to-earth, orderly, and love tradition. They often find themselves joining civic clubs!

ESTP (Extroverted sensing with thinking): These are action-oriented people, often sophisticated, sometimes ruthless -- our "James Bonds." As mates, they are exciting and charming, but they have trouble with commitment.
They make good promoters, entrepreneurs, and con artists.

Sunday, January 12, 2020

Dramatic Irony in Oedipus Essay

In the play written by Sophocles, Oedipus the King, there are several instances of irony. Dramatic irony, or tragic irony as some critics prefer to call it, usually means a situation in which a character of the play has limited knowledge and says or does something without any idea of its significance. The audience, however, already knows what is going to occur or what the consequences of the character’s actions will be. The degree of irony and the effect it has depend upon the readers’ grasp and recognition of some discrepancy between two things.

Our first taste of dramatic irony comes very early in the play, when Oedipus vows to bring to justice the killer of Laius, who is in reality himself. When he learns that bringing Laius’ killer to justice will rid the city of a terrible plague, he sets forth with a plan to track down the killer. Oedipus begins to curse the killer and vows: Oedipus: As for the criminal, I pray to God – Whether it be a lurking thief, or one of a number – I pray that that man’s life be consumed in evil and wretchedness. And as for me, this curse applies no less (968). This is very ironic, as Oedipus is indeed, without knowledge of the truth, talking about himself.

Another example of dramatic irony is the power of fate and Oedipus’ powerlessness against it. Throughout the play we are aware of Oedipus’ fate, and we realize there is nothing that he can do to change it. When Oedipus addresses his city after listening to their plea for help against the terrible sickness and plague that has taken over the city, he says: Oedipus: I know that you are deathly sick; and yet, Sick as you are, not one is as sick as I. (963) The audience understands the truth and the irony in that statement: Oedipus should not worry about becoming ‘sick’, for he is already infested with the sickness.

A third example of the irony of Oedipus is the fact that Oedipus seems to be blind and deaf to the truth. He appears to be on a valiant search for truth and justice for the killer of Laius, yet refuses to hear the truth when it is spoken to him. In order to hear the truth Oedipus needed to be able to listen and interpret it, yet he only heard what he wanted to hear, rendering him unable to understand the mystery of who he truly was.

In this play there seems to be a constant string of ironies throughout. Oedipus is in denial of the truth. In his dramatic speeches he misconstrues the information that he has been given by Teiresias, as well as by Creon and Iocaste. The horrifying realization that the oracle’s prophecy is in fact the truth causes Oedipus to blind himself. The audience therefore pities him, which is a result of the use of dramatic irony. The use of irony in a play allows the writer to make the audience want to see how the events which are occurring mentally affect the main character, even if they already know how the story will end, as in Oedipus the King.

Kennedy, X. J., and Dana Gioia. “Oedipus the King.” Literature: An Introduction to Fiction, Poetry, and Drama. 2nd edition. New York: Addison Wesley Longman, 2000. 960-1005.

Friday, January 3, 2020

Essay about Power and Greed - Macbeth

Power and Greed: The Driving Force behind the Story of Macbeth

The rise of an individual and the gain of power can often be intoxicating. This control placed in the hands of one can often ignite thoughts and actions of greed found deep inside. This can often be seen in the history of civilization, as countless leaders have neglected the good of their people to fuel their own selfish desires. Lord Acton once expressed, “Power tends to corrupt, and absolute power corrupts absolutely.” This is embodied in the play Macbeth by William Shakespeare, as numerous characters abuse power to manipulate and destroy the lives of many. Through the examination of Lady Macbeth, the three witches, and Macbeth, it becomes apparent how the gain and loss of power …

The corruption of power through knowledge can become large when the sensitivity of this information is not realized. This is shown in the witches’ use of their knowledge of Macbeth’s fate; their consideration for the possible consequences does not seem to be visible throughout the entire play. This is also pointed out by Hecate in the play as she exclaims: “To trade and traffic with Macbeth / In riddles, and affairs of death; / And I, the mistress of your charms, / The close contriver of all harms, / Was never call’d to bear my part” (3.5.4-9). The scene shows the three witches’ lack of consideration of the outcome of their prophecy, as they did not consult their higher power, Hecate, over whether they should tell Macbeth of his prophecy. Their thoughtless actions caused many deaths and much corruption among themselves and the entire country of Scotland.

The witches’ gain of power further corrupted them, as they did not tell Macbeth his full prophecy. When the witches first tell Macbeth of his fate, the first witch states, “All hail Macbeth! Hail to thee, thane of Glamis!” (1.3.48-50). This is then followed by the second witch claiming, “All hail Macbeth! Hail to thee, thane of Cawdor!” and finally the third witch states, “All hail Macbeth! That shalt be king hereafter!” The prophecy given can be seen as false, as they suggest that Macbeth is to be king for a long time when they state “hereafter.”