Saturday, January 25, 2020

Post-Traumatic Stress Disorder and Lucid Dreaming Therapy

Post-Traumatic Stress Disorder (PTSD) has seen a steep rise in recent years, affecting more than 1 adult in every 12 (National Comorbidity Survey Replication [NCS-R], 2001-2003). Per the American Psychiatric Association, it is defined as a psychiatric disorder that can occur in people who have experienced or witnessed a traumatic event such as a natural disaster, a serious accident, a terrorist act, war/combat, rape or other violent personal assault (APA, 2015). An adult diagnosed with PTSD can arguably regain normality in behaviour and mindset through various forms of psychotherapy and medication, and consequently recover from the disorder. Lucid Dreaming Therapy (LDT) is becoming an increasingly large influence within the format of exposure therapy, which begs the question: to what extent can lucid dreaming be effective in treating the identifying characteristics of PTSD?

Exposure therapy is a form of behavioural therapy in which a patient re-enters the setting in which they experienced the initial trauma, whether virtually, imaginatively or physically, and attempts to confront the troubling factor (APA, Division 12). Exposure therapy is promoted as a treatment component for a range of problems, including phobias, Social Anxiety Disorder and PTSD. With PTSD, however, the difficulty is the inability to physically recreate the event exactly as it originally occurred, with all the smells, sounds and emotions originally experienced. The goal of Lucid Dreaming Therapy (LDT) is to reduce the harm caused by PTSD in order to enable a suffering adult to function independently and successfully in various environments (Green & McCreery, 1994; Halliday, 1988; LaBerge, 1985; LaBerge & Rheingold, 1990; Tholey, 1988). LDT is most successful in combination with early intervention: treatment soon after a traumatic event offers a greater possibility of alleviating effects such as nightmares and depression.

Characterizing Description of PTSD

PTSD is classified as a trauma- and stressor-related psychiatric disorder, largely due to four common features that appear from three months to years after the occurrence of a traumatic event. These characteristics are intrusive memories, avoidance, negative changes in thought and mood, and changes in emotional reactions (DSM-IV-TR to DSM-5). The diagnostic features of PTSD are best described in the Diagnostic and Statistical Manual of Mental Disorders: DSM-5. Eight criteria must be met for a diagnosis of PTSD, and within each, additional requirements exist.

Exposure to death, violence or injury is one key feature of PTSD, referred to as the stressor. This can be marked through direct exposure, witnessing the trauma, learning that a relative or close friend was exposed to a trauma, or indirect exposure to aversive details of the trauma. A patient must meet at least one of these exposure criteria to be diagnosed with PTSD. Symptoms of intrusion are another foundation of PTSD, characterized by persistently recurring re-experiencing of the trauma. Such symptoms include recurrent, involuntary and intrusive memories, traumatic nightmares, dissociative reactions such as flashbacks ranging on a continuum from brief episodes to loss of consciousness, intense or prolonged distress after exposure to traumatic reminders, as well as marked physiological reactivity after exposure to trauma-related stimuli.
Persistent effortful avoidance of distressing trauma-related stimuli after the event is another core feature of PTSD. This can be marked through avoidance of trauma-related thoughts or feelings, in addition to or in place of trauma-related external reminders (e.g. people, places, objects or activities). Negative alterations in cognition are often a by-product of PTSD and therefore a key factor in diagnosis. These alterations include: dissociative amnesia in relation to the key features of the traumatic event, persistent or distorted negative beliefs and expectations about oneself or the world, persistent blame of oneself or others for causing the traumatic event or for its consequences, persistent negative trauma-related emotions, markedly diminished interest in pre-traumatic significant activities, a sense of alienation/detachment from others, and a persistent inability to experience positive emotions. A patient must have at least two of these symptoms to be diagnosed with PTSD.

There are many well-known features and disorders associated with PTSD. Insomnia, ranging from mild to profound, is prevalent in most cases. Irritability, aggression, self-destructive actions or recklessness are behavioural symptoms that may accompany PTSD. Additionally, hypervigilance and an exaggerated startle response, sometimes accompanied by problems in concentration, are examples of alterations in arousal and reactivity that may have begun or worsened after the traumatic event. Two of these alterations are necessary for a diagnosis of PTSD. Other factors such as duration/persistence of symptoms, functional impairment and confirmation of exclusion (verification that the disturbance is not due to medication, substance use, or other illness) are key in the diagnosis of PTSD. By definition, the onset of PTSD requires that the given symptoms persist for a minimum of a month. Although to a comparatively minor extent, most symptoms are present directly after the trauma and continue to develop over time.

PTSD is two to three times more prevalent in females than in males. Sexual assault or child sexual abuse is the more likely trauma among women, compared to accidents, physical assault, combat, disaster or witnessing death/injury among men. The median number of Post-Traumatic Stress Disorder sufferers is 7 to 8 per 100 individuals, with reported rates ranging from 7 to 20 per 100 individuals, the latter being combat-related. The most recent statistic shows up to 8 in 100 individuals may be diagnosed with PTSD (DSM-5). As the direct/chemical cause of PTSD is debatable, the reason for the recent increase is, while speculated upon, currently unknown.

Methods of Lucid Dreaming Therapy (LDT)

Lucid Dreaming Therapy (LDT) is an emerging form of treatment that has been specifically researched for application to PTSD. Lucid dreaming is defined as the state in which an individual is aware that they are dreaming and subsequently obtains control over their dreams. The phenomenon of lucid dreaming dates back centuries and quite possibly millennia, with reports of its use from as early as the eighth century, in the form of what was known as Dream Yoga. With scientific confirmation of the phenomenon in the late 20th century, therapeutic possibilities began to be brought to light.
Lucid Dreaming Treatment (LDT) arose from this idea as an alternative cognitive-restructuring technique, but only a small amount of research has been conducted on the topic, composed mainly of case studies (Abramovitch, 1995; Brylowski, 1990; Spoormaker & van den Bout, 2006; Spoormaker, van den Bout, & Meijer, 2003; Zadra & Pihl, 1997). Nightmares are defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) as extremely frightening and anxiety-provoking dreams which awaken the dreamer, followed by full alertness (APA, 2000). Although this is the current definition used as a diagnostic criterion, according to DSM-IV-TR, and in this essay, it should be mentioned that some have challenged this definition (Spoormaker, Schredl, & van den Bout, 2005; Zadra, Pilon, & Donderi, 2006). In the adult population, as many as 70% of individuals report at least an occasional nightmare, and 2-5% suffer from recurrent nightmares (Lancee, Spoormaker, Krakow, & van den Bout, 2008). Suffering from recurrent nightmares causes distress in waking life and can result in both occupational and social dysfunction. The fear and anxiety which the nightmare provokes linger when the dreamer awakens, which may prevent the individual from returning to sleep for fear of re-experiencing it. It has been suggested that by becoming lucid during the nightmare, the dreamer can take control of the threatening situation and change the course of the nightmare, thus possibly alleviating feelings of fear and anxiety. This could result in reduced nightmare frequency, relieving the nightmare sufferer of its negative effects both in sleep and in waking life (Gackenbach & Bosveld, 1989; Gavie & Revonsuo, 2010; Green & McCreery, 1994; Halliday, 1988; LaBerge, 1985; LaBerge & Rheingold, 1990; Tholey, 1988).

In LDT, the participants describe their nightmare and are then introduced to the concept of LD: the possibility of becoming conscious while dreaming and of being able to alter the content at will. The participants are then taught different LD induction techniques, such as choosing a recurrent cue within their dreams to be a signal of being in the dream state, or questioning the nature of reality several times during the day, asking themselves "Am I dreaming?" The participants then choose an alternative, more positive scenario of the nightmare, focusing on the content they wish to alter whilst lucid (Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997).

A pilot study conducted by the Department of Clinical Psychology of Utrecht University in the Netherlands aimed to evaluate the effects of LDT on recurrent nightmares, an identifying characteristic of PTSD. The participants were 23 individuals (16 female, 7 male) who had recurrent nightmares. Participants were required to fill out questionnaires regarding their sleep and diagnostic traits of PTSD. These individuals were randomly divided into 3 groups: 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on a waiting list. LDT consisted of exposure, mastery, and lucid dreaming exercises to train the mind to become more self-aware. Participants then filled out the same questionnaires 12 weeks after the intervention as a follow-up. It was found that by the follow-up, nightmare frequency in both treatment groups had decreased.
There were no significant changes observed in sleep quality or severity of posttraumatic stress disorder symptoms. This led to the conclusion that while LDT seems effective in reducing the frequency of nightmares, the primary therapeutic components of exposure, mastery, or lucidity remain unclear. The results of utilizing LD as treatment are consistent, indicating that LDT is effective for reducing nightmare frequency (Abramovitch, 1995; Brylowski, 1990; Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997). A one-year follow-up showed that four out of five participants, who prior to the treatment suffered from nightmares once every few days, went down to once every few months or no longer had any nightmares (Zadra & Pihl). In another study the treatment consisted of one two-hour session either individually or in a group, or, as the control condition, placement on a waiting list where no treatment was received. The participants had suffered from nightmares for over one year, at least once a week. The 12-week follow-up showed that nightmare frequency decreased in both treatment conditions, which was not the case for the control group (Spoormaker & van den Bout). For some participants LDT was also effective in reducing non-recurrent nightmares with differing contents (Zadra & Pihl). Some of the participants also subjectively reported slightly improved sleep quality after LDT (Spoormaker et al.), and dream lucidity resulted in more positive psychological elements which were also reflected in waking life (Zadra & Pihl). Similar effects have been reported by Brylowski and Abramovitch. The studies showed that while nightmare frequency was reduced following LDT, not all of the participants succeeded in becoming lucid and lucidly altering the content of the dream. One explanation offered is that the mere feeling of control is central to LDT: being able to master the nightmare and not be its victim seems to play an equally vital role as the actual altering of the content (Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997).

Experiencing a traumatic event of extremely frightening and life-threatening character may, for some people, develop into Posttraumatic Stress Disorder (PTSD). PTSD is a severe anxiety disorder in which the symptoms are collected under three clusters: intrusive/re-experiencing symptoms, avoidance symptoms and hyperarousal symptoms. Those suffering from PTSD endure highly disturbing recollections of the event. They display heightened sensitivity towards both internal and external stimuli which resemble or in any way symbolize some aspect of the original event, and when confronted with such symbols or conditions, they experience emotional numbness and sleep difficulties. The individual's self-defence mechanism leads them to avoid all such stimuli which may remind them of the event. Hence those suffering from PTSD often experience constant conflicts in interpersonal relationships, which can be attributed to this heightened sensitivity. It is not uncommon for them to display recurring avoidance patterns in occupational situations which may remind them of the traumatic event (APA, 2000). In addition to heightened sensitivity and severe anxiety, posttraumatic nightmares that replay or indirectly symbolize the original traumatizing event constitute the most frequent symptom of PTSD (APA, 2000). It has been estimated that 60-80% of PTSD patients suffer from posttraumatic nightmares (Spoormaker, 2008).
However, research has shown that treating PTSD does not necessarily reduce nightmare frequency (Spoormaker; Spoormaker & Montgomery, 2008). In contrast, Imagery Rehearsal Therapy (IRT), a treatment focused on alleviating nightmare frequency in PTSD, also reduces general PTSD symptom severity (Krakow & Moore, 2007). Research has also shown that nightmares and disturbed sleep may be a risk factor for developing and maintaining PTSD (Mellman & Hipolito, 2006). Due to these findings, Spoormaker (2008) and Spoormaker and Montgomery (2008) stated that posttraumatic nightmares ought not to be viewed as a secondary symptom but rather as a central characteristic in the advancement of posttraumatic stress disorder. Their series of studies and findings led them to infer that posttraumatic nightmares might develop into a disorder of their own, which therefore demands specific treatment.

LDT is effective in reducing the frequency of recurrent nightmares (Abramovitch, 1995; Brylowski, 1990; Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997), and thus it has been suggested that LDT could be a valuable supplement in the treatment of PTSD, focusing on decreasing the frequency of posttraumatic nightmares. As posttraumatic nightmares are a nocturnal replay of the original traumatic event, the patient is reminded of the trauma every time they dream about it. A reduction in the frequency of posttraumatic nightmares would lead to an abatement of fear and anxiety simply because the trauma is relived less often. In addition, as anticipated by Spoormaker (2008) and Spoormaker and Montgomery (2008), posttraumatic nightmares not only intensify but also prolong the severity of PTSD. As such, LDT could work as a supplement to existing treatments of PTSD by reducing nightmare frequency. Furthermore, LDT offers the patient the opportunity to alter the content of the dream into a less fearsome one, which could reduce the feelings of fear and anxiety within the dream. If LDT is effective both in reducing nightmare frequency and in reducing the intense feelings of fear and anxiety, it might also be effective in decreasing the fear and anxiety associated with the original trauma during wakefulness, which in turn could lead to a reduction in general PTSD symptom severity.

While this possible effectiveness of LDT on PTSD was proposed by Green and McCreery (1994) in the early days of LD research and more recently by Gavie and Revonsuo (2010), there has only been one study in which researchers attempted to treat PTSD patients with LDT (Spoormaker & van den Bout, 2006). They found that nightmare frequency was significantly reduced in subjects receiving LDT, but the study did not reveal any significant reduction in general PTSD symptom severity, which the authors proposed might have been due to the low baseline of PTSD symptom severity in the studied population. Moreover, the study included only one participant out of 23 who was actually diagnosed with PTSD (Spoormaker & van den Bout). Gavie and Revonsuo were adamant that no conclusions can be drawn from this single study and encouraged future researchers to investigate the effect of LDT on PTSD nightmares and other PTSD symptoms with larger groups of diagnosed PTSD patients and longer lucidity interventions.
Fear and Control: Two Key Components for LDT

Fear is a main component of nightmares, experienced both during sleep in relation to the nightmare content and during wakefulness, as suffering from recurrent nightmares can lead to fear of going to sleep due to the risk of re-experiencing the nightmare. Fear also represents one of the key emotions in the course of PTSD (APA, 2000). In PTSD, fear is not only related to the extreme fright experienced during the traumatizing event, but also refers to the massive feeling of fear evoked when the patient encounters associable stimuli, which often serve as reminders of the original event. Posttraumatic nightmares generally replicate the original event, meaning that every time the nightmare occurs, the patient re-lives the trauma and its accompanying fear (Gavie & Revonsuo, 2010). Although LDT has been shown to be effective in reducing recurrent nightmares, not all participants succeeded in becoming lucid and lucidly altering the content of the nightmare. This has been suggested to be because the feeling of control, following from the mere knowledge of the possibility of mastering the nightmare, is equally as important as the actual altering of the content (Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997). As such, control might constitute a key component of LDT, both with respect to lucidly controlling the content of the nightmare and altering the course of the dream, and with respect to the feeling of control brought by the thought that the fear, both during the dream and during wakefulness, is something that can be overcome. In this sense, LDT might prove effective not only for patients suffering from nightmares, by reducing nightmare frequency, but also for patients suffering from disorders characterised by fear, offering them the possibility to control and reduce the level of fear they experience.

In one case study, a 35-year-old woman diagnosed with Borderline Personality Disorder (BPD) and major depression complained about frequent nightmares. She suffered from one to four nightmares per week, which threatened her self-confidence and sense of security. She did not suffer from recurrent nightmares, but her nightmares did contain a recurrent theme, relating to the physical and mental abuse she experienced from her father as a child and from her husband as an adult. These nightmares were so intense that she had difficulty separating her experiences in them from her experiences in reality, and sometimes spoke of them as if they were real events (Brylowski, 1990). The patient was introduced to the phenomenon of LD and was instructed to keep a dream journal, which she was to bring to therapy each week. She was also told to practice an LD induction technique every night in order to learn how to become lucid during the dream. The appearance of her father or husband in the nightmare was chosen as a dream cue, used as an indicator to remind her that she was just dreaming. Upon recognising that she was dreaming, she was to use the realisation as a reminder that she was safely lying in bed and there was nothing to fear (Brylowski, 1990). During a six-month period, which included 24 sessions with her therapist, the patient experienced three lucid dreams and was able to alter the course of the nightmare in all three cases. Using LDT resulted in reduced nightmare frequency, intensity and distress, which provided her with a sense of mastery in relation to her emotions and responses to nightmares.
Following these results, her therapist suggested that these abilities and attitudes could be used in waking life when dealing with similar problems. So, whenever she was faced with a stirred emotion or a difficult situation in waking life, she was able to remind herself of how she had controlled a similar situation in the dream state. In turn, she now had the capacity to deal with the waking situation just as she had while (lucid) dreaming (Brylowski, 1990). As a result, LDT provided her with a sense of mastery in relation to her emotions and responses to nightmares as well as in her waking life, which in turn encouraged her continued engagement in psychotherapy. What Green and McCreery (1994) put forward is that LD provides us with the experience of achieving control over a mental aspect of our lives, in this case distressing nightmares. They argued that gaining control over one aspect might, in turn, have a generalised therapeutic effect. In the case study, Brylowski (1990) showed how LDT not only reduced nightmare frequency and distress, but also how engaging in LDT could extend into managing situations in waking life. LDT provided the patient with the experience of mastering a fearful situation within a nightmare which, prior to the treatment, had affected her to the point where she could not differentiate nightmares from waking events. After the treatment the patient expressed increased self-confidence, knowing that she now possessed the capacity to make changes in other waking circumstances as well.

Brylowski (1990) also noted that nightmares can occur across diagnostic syndromes. According to DSM-IV-TR, nightmares can occur frequently during the course of many psychological disorders without being a specific diagnostic symptom, for example as part of Personality Disorders, Anxiety Disorders, Mood Disorders and Schizophrenia (APA, 2000). Brylowski concluded that lucid dreaming worked well for this patient as it motivated her to start and stay in therapy. He suggested that LD as a therapeutic tool ought to be considered not only for treating nightmares, but also in the treatment of personality disorders. Although diagnosed with BPD, the patient also showed symptoms related to PTSD, i.e. nightmares which directly or symbolically represented a traumatic event (a history of abuse) and depression which, according to DSM-IV-TR, is highly associated with PTSD (APA, 2000). On the basis of this single case, it is premature to draw any conclusions on the effect of LDT on personality disorders. However, engaging in LDT did have a general therapeutic effect in this case study, and as such, LDT could be valuable as a supplement in the treatment of BPD and possibly even other personality disorders. Overall, more studies are needed to further investigate the possible general therapeutic value of gaining control over fear and anxiety using LDT, both in relation to recurrent nightmares and to other psychological disorders such as PTSD and personality disorders.

The current studies investigating the potential therapeutic value of LD in reducing recurrent nightmares have shown promising results, where engaging in Lucid Dreaming Treatment (LDT) has resulted in decreased nightmare frequency (Abramovitch, 1995; Brylowski, 1990; Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997), slightly increased subjective sleep quality (Spoormaker et al.) and reduced nightmare intensity and distress (Brylowski).
As such, it has been suggested across these studies that LDT might be effective in reducing posttraumatic nightmares in PTSD (Gavie & Revonsuo, 2010; Green & McCreery, 1994). Every time a nightmare occurs, the patient experiences the trauma and the extreme fear associated with it. Therefore, there is the possibility that relieving the posttraumatic nightmare could, in turn, reduce general PTSD symptom severity (Gavie & Revonsuo). With larger groups of diagnosed PTSD patients and longer lucidity interventions, future research could study the effect of LDT on posttraumatic nightmares. As examined, one case study showed that attitudes and skills learned through LDT can be transferred and applied to waking life situations (Brylowski, 1990). This could be an indication that LDT has the potential to work beyond the more specific focus of alleviating nightmares. Although nightmare frequency was reduced, not all of the patients were able to reach lucidity and alter the course of events in their nightmare (Spoormaker & van den Bout, 2006; Spoormaker et al., 2003; Zadra & Pihl, 1997). On this basis, one possible and important key component of LDT could be control. Phobic patients have been found to be less likely to believe they have control over events (Leung & Heimberg, 1996). Considering that lucid dreamers tend to believe in their own control over waking situations to a higher degree than non-lucid dreamers (Blagrove & Hartnell, 2000; Blagrove & Tucker, 1994), this suggests that control could be one of the key elements of LDT and that LDT could be a valuable supplement in the treatment of phobia. Further and more extensive research is required to investigate the underlying functioning and other effects of LDT more deeply. There is also a gap in the research, where an opportunity exists to compare LDT to other cognitive-restructuring techniques, such as Imagery Rehearsal Therapy (IRT) and exposure therapy. To further explore the effects of LDT, longer practice of LD induction techniques and more intensive lucidity interventions are needed, applied to larger groups of recurrent nightmare sufferers and to diagnosed PTSD and phobic patients. There is still untapped potential in the use of LD as a therapeutic tool and supplement in the treatment of these symptoms, which needs to be studied in depth.

Friday, January 17, 2020

Achieving Fault-Tolerance in Operating Systems

Introduction

Fault-tolerant computing is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults. A fault-tolerant system may be able to tolerate one or more fault-types including: i) transient, intermittent or permanent hardware faults, ii) software and hardware design errors, iii) operator errors, or iv) externally induced upsets or physical damage. An extensive methodology has been developed in this field over the past thirty years, and a number of fault-tolerant machines have been developed – most dealing with random hardware faults, while a smaller number deal with software, design and operator faults to varying degrees. A large amount of supporting research has been reported.

Fault tolerance and dependable systems research covers a wide spectrum of applications ranging across embedded real-time systems, commercial transaction systems, transportation systems, and military/space systems – to name a few. The supporting research includes system architecture, design techniques, coding theory, testing, validation, proof of correctness, modelling, software reliability, operating systems, parallel processing, and real-time processing. These areas often involve widely diverse core expertise ranging from formal logic, mathematics of stochastic modelling, graph theory, hardware design and software engineering.

Recent developments include the adaptation of existing fault-tolerance techniques to RAID disks, where information is striped across several disks to improve bandwidth and a redundant disk is used to hold encoded information so that data can be reconstructed if a disk fails. Another area is the use of application-based fault-tolerance techniques to detect errors in high-performance parallel processors. Fault-tolerance techniques are expected to become increasingly important in deep sub-micron VLSI devices to combat increasing noise problems and improve yield by tolerating defects that are likely to occur on very large, complex chips.

Fault-tolerant computing already plays a major role in process control, transportation, electronic commerce, space, communications and many other areas that impact our lives. Many of its next advances will occur when applied to new state-of-the-art systems such as massively parallel scalable computing, promising new unconventional architectures such as processor-in-memory or reconfigurable computing, mobile computing, and the other exciting new things that lie around the corner.

Basic Concepts

Hardware Fault-Tolerance – The majority of fault-tolerant designs have been directed toward building computers that automatically recover from random faults occurring in hardware components. The techniques employed to do this generally involve partitioning a computing system into modules that act as fault-containment regions. Each module is backed up with protective redundancy so that, if the module fails, others can assume its function. Special mechanisms are added to detect errors and implement recovery. Two general approaches to hardware fault recovery have been used: 1) fault masking, and 2) dynamic recovery.

Fault masking is a structural redundancy technique that completely masks faults within a set of redundant modules. A number of identical modules execute the same functions, and their outputs are voted to remove errors created by a faulty module. Triple modular redundancy (TMR) is a commonly used form of fault masking in which the circuitry is triplicated and voted.
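To make the voting idea concrete, here is a minimal Python sketch of majority voting over three redundant modules. It is a software analogue offered for illustration only; the module functions and the injected fault are assumptions for demonstration, not part of any real TMR hardware design.

# Minimal sketch of TMR-style fault masking: three redundant "modules"
# (plain functions here) compute the same result and a majority vote
# masks a single faulty output. Purely illustrative.
from collections import Counter

def tmr_vote(replica_outputs):
    """Return the majority value among replica outputs; raise if the
    vote is no longer valid (no value reaches a majority)."""
    value, votes = Counter(replica_outputs).most_common(1)[0]
    if votes <= len(replica_outputs) // 2:
        raise RuntimeError("TMR failure: no majority among replicas")
    return value

def module_a(x): return x * x
def module_b(x): return x * x
def module_c(x): return x * x + 1  # injected fault for demonstration

if __name__ == "__main__":
    print(tmr_vote([module_a(7), module_b(7), module_c(7)]))  # 49

The single faulty module is outvoted by the two good ones; only when two of the three replicas disagree does the vote fail, mirroring the failure condition described below.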
The voting circuitry can also be triplicated so that individual voter failures can also be corrected by the voting process. A TMR system fails whenever two modules in a redundant triplet create errors, so that the vote is no longer valid. Hybrid redundancy is an extension of TMR in which the triplicated modules are backed up with additional spares, which are used to replace faulty modules – allowing more faults to be tolerated. Voted systems require more than three times as much hardware as non-redundant systems, but they have the advantage that computations can continue without interruption when a fault occurs, allowing existing operating systems to be used.

Dynamic recovery is required when only one copy of a computation is running at a time (or in some cases two unchecked copies), and it involves automated self-repair. As in fault masking, the computing system is partitioned into modules backed up by spares as protective redundancy. In the case of dynamic recovery, however, special mechanisms are required to detect faults in the modules, switch out a faulty module, switch in a spare, and instigate those software actions (rollback, initialization, retry, and restart) necessary to restore and continue the computation. In single computers special hardware is required along with software to do this, while in multicomputers the function is often managed by the other processors. Dynamic recovery is generally more hardware-efficient than voted systems, and it is therefore the approach of choice in resource-constrained (e.g., low-power) systems, and especially in high-performance scalable systems in which the amount of hardware resources devoted to active computing must be maximized. Its disadvantages are that computational delays occur during fault recovery, fault coverage is often lower, and specialized operating systems may be required.

Software Fault-Tolerance – Efforts to attain software that can tolerate software design faults (programming errors) have made use of static and dynamic redundancy approaches similar to those used for hardware faults. One such approach, N-version programming, uses static redundancy in the form of independently written programs (versions) that perform the same functions, and their outputs are voted at special checkpoints. Here, of course, the data being voted may not be exactly the same, and a criterion must be used to identify and reject faulty versions and to determine a consistent value (through inexact voting) that all good versions can use. An alternative dynamic approach is based on the concept of recovery blocks: programs are partitioned into blocks and acceptance tests are executed after each block. If an acceptance test fails, a redundant code block is executed; a sketch of this scheme appears below.

An approach called design diversity combines hardware and software fault-tolerance by implementing a fault-tolerant computer system using different hardware and software in redundant channels. Each channel is designed to provide the same function, and a method is provided to identify if one channel deviates unacceptably from the others. The goal is to tolerate both hardware and software design faults. This is a very expensive technique, but it is used in very critical aircraft control applications.
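The following minimal Python sketch illustrates the recovery-block scheme described above. The primary routine, the alternate, and the acceptance test are hypothetical stand-ins chosen for demonstration; a real system would also checkpoint state before each block so a failed block's side effects could be undone.

# Recovery-block sketch: run the primary block, check its result with
# an acceptance test, and fall back to an alternate block on failure.
# The routines below are illustrative assumptions only.
import math

def acceptance_test(x, root):
    # Accept the result only if squaring it reproduces x closely.
    return abs(root * root - x) < 1e-9

def primary_sqrt(x):
    # Deliberately under-iterated Newton's method; may fail the test.
    guess = x
    for _ in range(3):
        guess = 0.5 * (guess + x / guess)
    return guess

def alternate_sqrt(x):
    # Independently written alternate (here, the library routine).
    return math.sqrt(x)

def recovery_block(x):
    for block in (primary_sqrt, alternate_sqrt):
        result = block(x)
        if acceptance_test(x, result):
            return result  # acceptance test passed
    raise RuntimeError("all alternates failed the acceptance test")

if __name__ == "__main__":
    print(recovery_block(2.0))  # primary fails the test; alternate succeeds

The point of the pattern is that the alternate is only executed when the acceptance test rejects the primary's output, so the redundant code costs nothing on the common, fault-free path.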
The key technologies that make software fault-tolerant

Software involves a system's conceptual model, which is easier than a physical model to instrument with tests for things that violate basic concepts. To the extent that a software system can evaluate its own performance and correctness, it can be made fault-tolerant – or at least error-aware; to the extent that a software system can check its responses before activating any physical components, a mechanism for improving error detection, fault tolerance, and safety exists. We can use three key technologies – design diversity, checkpointing, and exception handling – for software fault tolerance, depending on whether the current task should be continued or can be lost while avoiding error propagation (ensuring error containment and thus avoiding total system failure). Tolerating solid software faults for task continuity requires diversity, while checkpointing tolerates soft software faults for task continuity. Exception handling avoids system failure at the expense of current task loss.

Runtime failure detection is often accomplished through an acceptance test or comparison of results from a combination of "different" but functionally equivalent system alternates, components, versions, or variants. However, other techniques – ranging from mathematical consistency checking to error coding to data diversity – are also useful. There are many options for effective system recovery after a problem has been detected. They range from complete rejuvenation (for example, stopping with a full data and software reload and then restarting) to dynamic forward error correction to partial state rollback and restart; a toy sketch of checkpointing with rollback and retry follows.
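Below is a toy Python sketch of checkpoint, rollback, and retry. It is a minimal illustration under assumed conditions (an in-memory state dictionary and a randomly injected transient fault), not a depiction of any production checkpointing mechanism.

# Toy checkpoint/rollback sketch: state is saved before each risky
# operation and restored if a (simulated) transient fault occurs.
import copy
import random

class CheckpointedCounter:
    def __init__(self):
        self.state = {"total": 0}
        self._saved = copy.deepcopy(self.state)

    def checkpoint(self):
        self._saved = copy.deepcopy(self.state)   # save known-good state

    def rollback(self):
        self.state = copy.deepcopy(self._saved)   # discard partial update

    def add(self, value):
        self.state["total"] += value
        if random.random() < 0.2:                 # injected transient fault
            raise RuntimeError("transient fault")

def reliable_add(counter, value, retries=5):
    for _ in range(retries):
        counter.checkpoint()
        try:
            counter.add(value)
            return
        except RuntimeError:
            counter.rollback()                     # roll back and retry
    raise RuntimeError("fault persisted after retries")

if __name__ == "__main__":
    c = CheckpointedCounter()
    for v in range(10):
        reliable_add(c, v)
    print(c.state["total"])  # 45: each add is retried until it succeeds

Because the fault here is transient, rollback plus retry recovers the task; a solid (deterministic) fault would recur on every retry, which is why tolerating solid faults requires diversity instead, as noted above.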
The relationship between software fault tolerance and software safety

Both require good error detection, but the response to errors is what differentiates the two approaches. Fault tolerance implies that the software system can recover from – or in some way tolerate – the error and continue correct operation. Safety implies that the system either continues correct operation or fails in a safe manner. A safe failure is one in which the system, unable to tolerate the fault, still does no harm. So, we can have low fault tolerance and high safety by safely shutting down a system in response to every detected error. It is certainly not a simple relationship. Software fault tolerance is related to reliability, and a system can certainly be reliable and unsafe, or unreliable and safe, as well as the more usual combinations. Safety is intimately associated with the system's capacity to do harm. Fault tolerance is a very different property. Fault tolerance is – together with fault prevention, fault removal, and fault forecasting – a means for ensuring that the system function is implemented so that the dependability attributes, which include safety and availability, satisfy the users' expectations and requirements. Safety involves the notion of controlled failures: if the system fails, the failure should have no catastrophic consequence – that is, the system should be fail-safe. Controlling failures always includes some form of fault tolerance – from error detection and halting to complete system recovery after component failure. The system function and environment dictate, through the requirements in terms of service continuity, the extent of fault tolerance required. You can have a safe system that has little fault tolerance in it. When the system specifications properly and adequately define safety, then a well-designed fault-tolerant system will also be safe. However, you can also have a system that is highly fault tolerant but that can fail in an unsafe way. Hence, fault tolerance and safety are not synonymous.

Safety is concerned with failures (of any nature) that can harm the user; fault tolerance is primarily concerned with runtime prevention of failures in any shape or form (including prevention of safety-critical failures). A fault-tolerant and safe system will minimize overall failures and ensure that when a failure occurs, it is a safe failure. Several standards for safety-critical applications recommend fault tolerance – for hardware as well as for software. For example, the IEC 61508 standard (which is generic and application-sector independent) recommends, among other techniques: "failure assertion programming, safety bag technique, diverse programming, backward and forward recovery." Also, the Defence standard (MOD 00-55), the avionics standard (DO-178B), and the standard for space projects (ECSS-Q-40-A) list design diversity as a possible means for improving safety. Usually, the requirement is not so much for fault tolerance (by itself) as it is for high availability, reliability, and safety. Hence, IEEE, FAA, FCC, DOE, and other standards and regulations appropriate for reliable computer-based systems apply. We can achieve high availability, reliability, and safety in different ways. They involve a proper reliable and safe design, proper safeguards, and proper implementation. Fault tolerance is just one of the techniques that assure that a system's quality of service (in a broader sense) meets user needs (such as high safety).

History

The SAPO computer built in Prague, Czechoslovakia was probably the first fault-tolerant computer. It was built in 1950–1954 under the supervision of A. Svoboda, using relays and a magnetic drum memory. The processor used triplication and voting (TMR), and the memory implemented error detection with automatic retries when an error was detected. A second machine developed by the same group (EPOS) also contained comprehensive fault-tolerance features. The fault-tolerant features of these machines were motivated by the local unavailability of reliable components and a high probability of reprisals by the ruling authorities should the machine fail.

Over the past 30 years, a number of fault-tolerant computers have been developed that fall into three general types: (1) long-life, un-maintainable computers, (2) ultra-dependable, real-time computers, and (3) high-availability computers.

Long-Life, Unmaintained Computers

Applications such as spacecraft require computers to operate for long periods of time without external repair. Typical requirements are a probability of 95% that the computer will operate correctly for 5–10 years. Machines of this type must use hardware in a very efficient fashion, and they are typically constrained to low power, weight, and volume. Therefore, it is not surprising that NASA was an early sponsor of fault-tolerant computing. In the 1960s, the first fault-tolerant machine to be developed and flown was the on-board computer for the Orbiting Astronomical Observatory (OAO), which used fault masking at the component (transistor) level. The JPL Self-Testing-and-Repairing (STAR) computer was the next fault-tolerant computer, developed by NASA in the late 1960s for a 10-year mission to the outer planets. The STAR computer, designed under the leadership of A. Avizienis, was the first computer to employ dynamic recovery throughout its design. Various modules of the computer were instrumented to detect internal faults and signal fault conditions to a special test-and-repair processor that effected reconfiguration and recovery.
An experimental version of the STAR was implemented in the laboratory and its fault-tolerance properties were verified by experimental testing. Perhaps the most successful long-life space application has been the JPL Voyager computers, which have now operated in space for 20 years. This system used dynamic redundancy in which pairs of redundant computers checked each other by exchanging messages, and if a computer failed, its partner could take over the computations. This type of design has been used on several subsequent spacecraft.

Ultra-dependable Real-Time Computers

These are computers for which an error or delay can prove to be catastrophic. They are designed for applications such as control of aircraft, mass transportation systems, and nuclear power plants. The applications justify massive investments in redundant hardware, software, and testing. One of the first operational machines of this type was the Saturn V guidance computer, developed in the 1960s. It contained a TMR processor and duplicated memories (each using internal error detection). Processor errors were masked by voting, and a memory error was circumvented by reading from the other memory. The next machine of this type was the Space Shuttle computer. It was a rather ad-hoc design that used four computers that executed the same programs and were voted. A fifth, non-redundant computer was included with different programs in case a software error was encountered.

During the 1970s, two influential fault-tolerant machines were developed by NASA for fuel-efficient aircraft that require continuous computer control in flight. They were designed to meet the most stringent reliability requirements of any computer to that time. Both machines employed hybrid redundancy. The first, designated Software Implemented Fault Tolerance (SIFT), was developed by SRI International. It used off-the-shelf computers and achieved voting and reconfiguration primarily through software. The second machine, the Fault-Tolerant Multiprocessor (FTMP), developed by the C. S. Draper Laboratory, used specialized hardware to effect error and fault recovery. A commercial company, August Systems, was a spin-off from the SIFT program. It has developed a TMR system intended for process-control applications. The FTMP has evolved into the Fault-Tolerant Processor (FTP), used by Draper in several applications, and the Fault-Tolerant Parallel Processor (FTPP) – a parallel processor that allows processes to run in a single machine or in duplex, triplex or quadruplex groups of processors. This highly innovative design is fully Byzantine resilient and allows multiple groups of redundant processors to be interconnected to form scalable systems.

The new generation of fly-by-wire aircraft exhibits a very high degree of fault tolerance in their real-time flight control computers. For example, the Airbus airliners use redundant channels with different processors and diverse software to protect against design errors as well as hardware faults. Other areas where fault tolerance is being used include control of public transportation systems and the distributed computer systems now being incorporated in automobiles.

High-Availability Computers

Many applications require very high availability but can tolerate an occasional error or very short delays (on the order of a few seconds) while error recovery is taking place. Hardware designs for these systems are often considerably less expensive than those used for ultra-dependable real-time computers.
Computers of this type often use duplex designs. Example applications are telephone switching and transaction processing. The most widely used fault-tolerant computer systems developed during the 1960s were the electronic switching systems (ESS) used in telephone switching offices throughout the country. The first of these AT&T machines, No. 1 ESS, had a goal of no more than two hours of downtime in 40 years. The computers are duplicated to detect errors, with some dedicated hardware and extensive software used to identify faults and effect replacement. These machines have since evolved over several generations to No. 5 ESS, which uses a distributed system controlled by the 3B20D fault-tolerant computer.

The largest commercial success in fault-tolerant computing has been in the area of transaction processing for banks, airline reservations, etc. Tandem Computers, Inc. was the first major producer and is the current leader in this market. The design approach is a distributed system using a sophisticated form of duplication. For each running process, there is a backup process running on a different computer. The primary process is responsible for checkpointing its state to duplex disks. If it should fail, the backup process can restart from the last checkpoint. Stratus Computer has become another major producer of fault-tolerant machines for high-availability applications. Their approach uses duplex self-checking computers, where each computer of a duplex pair is itself internally duplicated and compared to provide high-coverage concurrent error detection. The duplex pair of self-checking computers is run synchronously so that if one fails, the other can continue the computations without delay. Finally, the venerable IBM mainframe series, which evolved from S360, has always used extensive fault-tolerance techniques of internal checking, instruction retries and automatic switching of redundant units to provide very high availability. The newest CMOS-VLSI version, G4, uses coding on registers and on-chip duplication for error detection, and it contains redundant processors, memories, I/O modules and power supplies to recover from hardware faults – providing very high levels of dependability.

The server market represents a new and rapidly growing market for fault-tolerant machines, driven by the growth of the Internet and local networks and their needs for uninterrupted service. Many major server manufacturers offer systems that contain redundant processors, disks and power supplies, and automatically switch to backups if a failure is detected. Examples are SUN's ft-SPARC and the HP/Stratus Continuum 400. Other vendors are working on fault-tolerant cluster technology, where other machines in a network can take over the tasks of a failed machine. An example is the Microsoft MSCS technology. Information on fault-tolerant servers can readily be found in the various manufacturers' web pages.

Conclusion

Fault tolerance is achieved by applying a set of analysis and design techniques to create systems with dramatically improved dependability. As new technologies are developed and new applications arise, new fault-tolerance approaches are also needed. In the early days of fault-tolerant computing, it was possible to craft specific hardware and software solutions from the ground up, but now chips contain complex, highly integrated functions, and hardware and software must be crafted to meet a variety of standards to be economically viable.
Thus a great deal of current research focuses on implementing fault tolerance using COTS (Commercial-Off-The-Shelf) technology.

References

Avizienis, A., et al. (Eds.) (1987): Dependable Computing and Fault-Tolerant Systems Vol. 1: The Evolution of Fault-Tolerant Computing, Vienna: Springer-Verlag. (Though somewhat dated, the best historical reference available.)

Harper, R., Lala, J. and Deyst, J. (1988): "Fault-Tolerant Parallel Processor Architectural Overview," Proc. of the 18th International Symposium on Fault-Tolerant Computing FTCS-18, Tokyo, June 1988. (FTPP)

Computer (Special Issue on Fault-Tolerant Computing), 23, 7 (July 1990).

Lala, J., et al. (1991): The Draper Approach to Ultra Reliable Real-Time Systems, Computer, May 1991.

Jewett, D. (1991): A Fault-Tolerant Unix Platform, Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991. (Tandem Computers)

Webber, S. and Beirne, J. (1991): The Stratus Architecture, Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991.

Briere, D. and Traverse, P. (1993): AIRBUS A320/A330/A340 Electrical Flight Controls: A Family of Fault-Tolerant Systems, Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.

Sanders, W. and Obal, W. D. II (1993): Dependability Evaluation using UltraSAN, Software Demonstration in Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.

Beounes, C., et al. (1993): SURF-2: A Program For Dependability Evaluation Of Complex Hardware And Software Systems, Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.

Blum, A., et al. (1994): Modeling and Analysis of System Dependability Using the System Availability Estimator, Proc. of the 24th International Symposium on Fault-Tolerant Computing FTCS-24, Austin, TX, June 1994. (SAVE)

Lala, J. H. and Harper, R. E. (1994): Architectural Principles for Safety-Critical Real-Time Applications, Proc. IEEE, Vol. 82, No. 1, Jan 1994, pp. 25-40.

Jenn, E., Arlat, J., Rimen, M., Ohlsson, J. and Karlsson, J. (1994): Fault injection into VHDL models: the MEFISTO tool, Proc. of the 24th Annual International Symposium on Fault-Tolerant Computing (FTCS-24), Austin, Texas, June 1994.

Siewiorek, D. (Ed.) (1995): Fault-Tolerant Computing Highlights from 25 Years, Special Volume of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Papers selected as especially significant in the first 25 years of fault-tolerant computing.)

Baker, W. E., Horst, R. W., Sonnier, D. P. and Watson, W. J. (1995): A Flexible ServerNet-Based Fault-Tolerant Architecture, Proc. of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Tandem)

Tsai, Timothy K. and Iyer, Ravishankar K. (1996): "An Approach Towards Benchmarking of Fault-Tolerant Commercial Systems," Proc. of the 26th Symposium on Fault-Tolerant Computing FTCS-26, Sendai, Japan, June 1996. (FTAPE)

Kropp, Nathan P., Koopman, Philip J. and Siewiorek, Daniel P. (1998): Automated Robustness Testing of Off-the-Shelf Software Components, Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (Ballista)

Spainhower, L. and Gregg, T. A. (1998): G4: A Fault-Tolerant CMOS Mainframe, Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (IBM)
Kozyrakis, Christoforos E. and Patterson, David (1998): A New Direction for Computer Architecture Research, Computer, Vol. 31, No. 11, November 1998.

Thursday, January 9, 2020

Is It Wrong To Pay For Sex?

Is It Wrong To Pay For Sex? The video, Is It Wrong To Pay For Sex?, is an hour-and-a-half debate which focuses on the morals and ethics behind paying for sex. In the debate, three experts argued in favor of the motion and three argued against it. Prior to the debate, the audience voted 20 percent in favor of the motion and 50 percent against it, with 30 percent undecided. By the end of the debate, however, 45 percent voted in favor of the proposition, 46 percent voted against it, and 9 percent were undecided. Interestingly enough, when divided and tallied separately, the men voted 27 percent in favor of the motion and 66 percent against it, compared to 58 percent of the women in favor of the motion and 34 percent against it. The experts for the motion were Melissa Farley, a clinical and research psychologist, Catharine MacKinnon, who specializes in sex equality, and last but not least, Wendy Shalit of Williams College. The experts against the motion were Sydney Biddle Barrows, infamously known to millions as the "Mayflower Madam", Tyler Cowen, an economics professor, and Lionel Tiger, an anthropology professor. On the pro side of the debate, we have Wendy, who argues that human beings shouldn't be used as a means to your ends. According to Wendy, sex isn't as casual as paying for a hamburger. Paying for sex is different because it "teaches on the deepest and most personal aspects of ourselves".