
Grid Computing

Computational Grids combine heterogeneous, distributed resources across geographical and organisational boundaries. Grids may be formed to provide computational power for CPU-intensive simulation, high throughput computing for analysing many small tasks, or for data intensive tasks such as those required by the LHC experiments. In all of these situations the challenges are the same: how to enable dynamic access to these resources as securely, reliably and efficiently as possible, without central control and omniscience. The following chapter discusses the main concepts and components that combine to make computational Grids possible.

Introduction

There are four main characteristics that distinguish a Grid from other common distributed systems [REF]:

Heterogeneous: The resources of a Grid may be provided by multiple organisations, which are geographically distributed and apply local, autonomous resource utilisation policies. These resources are heterogeneous in service ability and in the style of utilisation.

Extendable: A Grid is able to grow from a small resource set into a huge global infrastructure. The service ability of a Grid is scalable to cope with the resource demands of various applications.

Dynamic: Since the resources and services are contributed by multiple organisations, each with its own local resource management and utilisation policy, their availability changes constantly, subject to the autonomy of the resource contributors and their utilisation behaviour.

High communication latency: Since a Grid may be distributed across a wide geographical area, the communication latencies involved are likely to be higher than in any localised system.

A Grid, according to Ian Foster [REF], coordinates resources that are not subject to centralised control, using standard, open, general-purpose protocols and interfaces, to deliver non-trivial Qualities of Service (QoS). These three points can be used as a starting point for our discussion. Distributed computing is a well known problem with many potential solutions. CORBA, RMI and, more recently, Web Services have been developed to cope with the failures and the lack of information and control that are inherent in a distributed system. But these systems are no longer enough to satisfy the needs of the scientific and, increasingly, the business community. By combining distributed resources into a single virtual resource, users are able to access far more computing power at lower cost and higher efficiency. The real cost is in the increased complexity of the system. Resource providers and consumers are dynamic, and the information about, and control of, the system is incomplete. Hardware and network failures, power cuts and human error, together with different architectures, operating systems and middleware, must all be handled. Without centralised control, networks of trust must be established. Collaborations create Virtual Organisations (VOs), which span traditional organisations and can be formed dynamically. Users and resources can then be authorised based on their membership of a particular VO. All of the above is only possible through the adoption of standard, open protocols and interfaces. The wide range of hardware and software available on a Grid means that the only hope for interoperability is that an application written for one middleware platform can speak the same language as another.
The adoption of a single security infrastructure, based upon a Public Key Infrastructure (PKI), is a good example of this. When every interface supports the same authentication method, users are able to use resources based solely on the ownership of their credentials (and potentially membership of a VO). The delivery of non-trivial Quality of Service (QoS) provides the motivation to overcome all of these hurdles. As network speeds have increased, it has become feasible to harness massive amounts of computing power across multiple domains, utilising resources that might otherwise be idle.

Architecture

The Grid is typically composed of layers, with higher layers making use of the functionality provided by lower layers. This is also referred to as the hourglass model[REF], where the neck defines a limited number of key protocols which can be used by a large number of applications to access a large number of resources. The key layers that are required in a typical Grid are shown in Figure 3.1 and are discussed in the following sections.

Fabric

Computational resources, high performance networks, storage devices and scientific instruments all combine to form the underlying fabric of a Grid. The fabric layer provides the resource-specific implementations of the operations that are required by the resource layer. Computational resources take the form of the CPUs upon which the work is performed. Typically, existing clusters of centralised, homogeneous computers are attached to the Grid, so that any user on the Grid can use the resources as if they were local to the site. Differences in processor architecture, 32 or 64 bit, and operating system need to be accounted for and hidden from the user. A global Workload Management System (WMS) is typically used to communicate with site-local Computing Elements (CEs). These CEs submit jobs to the Worker Nodes (WNs) that form the cluster via the existing Local Resource Management System (LRMS), such as LSF, PBS or Sun Grid Engine. The emergence of high-speed, optical networking in particular can be seen as one of the key driving forces without which these distributed, data intensive activities would be impossible. It is now common to find 1 Gb/s links, with 10 Gb/s links becoming increasingly available between the larger centres. These clusters will typically have large Mass Storage Systems for the secure, reliable storage of large amounts of data. The underlying technology of this Storage Element (SE) may again differ from site to site, and these differences need to be accounted for. Finally, the instruments from which raw data is obtained must be attached to the Grid. This may be a telescope, a microscope or, in the case of the LHC experiments, a particle detector. These detectors will produce upwards of a petabyte of data per year, which must be processed, analysed and stored on the Grid so that any member of the collaboration, at any point around the world, can obtain access to the data.

Connectivity

The connectivity layer glues the Grid fabric resources together by providing the core communication (e.g. transport, routing and naming) and security (e.g. authentication) protocols to support the exchange of information between Grid resources in the fabric layer. The protocols defined in the connectivity layer make communication among Grid resources easy and secure. In order to support transparent access to resources, single sign-on is required. Without this, users would be required to authenticate before using each resource required in their workflow.
Considering that this could encompass hundreds of resources across distinct administrative domains, it is clearly unacceptable that users should have to obtain and access a local account to use each facility.

Resource

The resource layer defines the information protocols for inquiring about the state of a Grid resource, and the management protocols for negotiating access to a shared resource. The protocols in the resource layer are concerned only with the sharing of an individual resource; no knowledge of any global state is required. This forms the neck of the hourglass: a narrow range of protocols which hides the heterogeneity beneath from the rich applications above. Secure connections are established through the connectivity layer to the resources in the fabric layer.

Collective

The collective layer provides services that combine all of the resources represented by the resource layer into a single global image. Services providing accounting, information, monitoring, security and scheduling operate at this level. Instead of submitting jobs to a single batch system, this layer can orchestrate the execution of jobs across multiple systems. Monitoring and diagnostic information is available to provide information about the state of the Grid as a whole. Security and policies can be applied at the community level, so that VO managers can control who can access resources.

Applications

The application layer is the one that users of the Grid interact with. Developers can use the services offered at the lower levels to compose applications that take advantage of the resources within the Grid. The applications are able to utilise the implementations (e.g. the APIs provided by a Grid middleware) of the protocols defined within each lower layer.

Standards

Standards are essential to ensure the interoperability and reuse of components in such a large and complex system. The original body overseeing standards in Grid computing, the Global Grid Forum (GGF), merged with the Enterprise Grid Alliance to form the Open Grid Forum[69]. These bodies are modelled after (and inherit from) existing standards bodies involved in Web standardisation, such as the Internet Engineering Task Force (IETF)[70] and the Organisation for the Advancement of Structured Information Standards (OASIS)[71]. The World Wide Web Consortium (W3C)[72] should also be mentioned as the body behind the standardisation of the HTTP, SOAP and XML technologies on which much of Grid computing relies. We will discuss two of the most relevant standards here.

OGSA

The first standard to be proposed was the Open Grid Services Architecture[73, 74] (OGSA) in 2002. OGSA defines standard protocols and interfaces to manage resources as part of a Service Orientated Architecture (SOA). The aim is to promote interoperability and to enable the reuse and composition of services, by providing low level functionality that is common to many services. OGSA extends the existing Web Services framework to provide functionality, such as discovery, creation, destruction and notification, which is required in a Grid Service. Web Services are typically persistent and stateless, something that may not be appropriate for a Grid Service. For example, imagine a service that reports on the status of a job. The service needs some concept of state, and the user does not want every other user to have access to their results.
For this reason Grid Services can be dynamic, transient and stateful. While OGSA defines the general architecture of a service based Grid, the Open Grid Services Infrastructure (OGSI) describes the plumbing that would make this architecture possible. In 2004 OGSI was superseded by the Web Services Resource Framework (WSRF), which addressed many of the issues in OGSI.

WSRF

The Web Services Resource Framework (WSRF)[75] has been designed to address the shortcomings of Web Services and the criticisms of OGSI: that it is too large and too different from traditional Web Services. WSRF retains most of the functionality of OGSI, but it is repackaged and re-factored into a set of six complementary standards more in line with existing Web Service standards. In OGSI stateful service instances have service data, whereas in WSRF stateless services act upon stateful resources with certain properties. A WS-Resource is a named, typed element of service data, which is related to a specific Web Service. The WS-ResourceProperties specification defines methods for querying and updating these resources, with the WS-ResourceLifetime specification detailing how the persistence of the service can be controlled. The remaining standards, WS-BaseFaults, WS-ServiceGroup, WS-BaseNotification and WS-BrokeredNotification, provide similar functionality to their OGSI counterparts. The draft standard was proposed by the Globus Alliance, IBM and HP in January 2004 and was standardised by OASIS. Globus Toolkit 4[22] is a WSRF compliant implementation, along with WSRF.NET[76], WSRF::Lite[77] and Websphere[78].

Middleware

In order to make the most of the resources provided in the Grid fabric, a set of low level services, which perform commonly used operations, is required. This increases the security, performance and reliability of Grid applications, while reducing the complexity for the developer. The four main areas of Grid services are outlined in the following sections, before a more detailed discussion of the middleware provided by different organisations.

Resource Management

Resources are the fundamental components of any Grid. Whether they are CPUs, storage, networks or some form of scientific instrument, they need to be accessible across the Grid according to some policy. Resource management provides the applications and interfaces required to access and control these heterogeneous resources in a consistent manner. The most common requirement is the management of computing resources. The WMS accepts jobs from the user and allocates them to resources. A Resource Broker (RB) will typically be used to match the user's requirements to an advertised resource. The resources themselves are typically accessed through some form of CE, which provides a bridge between the global WMS and the local resources. It will accept and execute jobs on its local infrastructure and report the current status of the jobs to some form of logging system. The CPU nodes that the CE represents are referred to as Worker Nodes (WNs), which may form a cluster in their own right or a looser distribution of workstations. At every stage the global and local components will assess the credentials of the owner of the job, to ensure that they are eligible for access and to determine what, if any, priority they should be given.

Information Services

Grids are dynamic systems in which data, resources and users are transient. Users and services lack knowledge of the status and availability of services, so methods of discovery and monitoring are required.
Information such as the system load and the location of data can be used by the WMS to allocate jobs. The provision of accurate and timely monitoring information makes it possible to identify and diagnose problems across the system.

Data Management

All Grids, regardless of their purpose, require mechanisms for the discovery, storage and transfer of data. What differs between Grids is the scale of the data that must be managed, and hence the performance that is required from the data management middleware. Users of the Grid will not be aware of the physical location of their data. File lookup services must be available to users and applications to provide the physical file name based on some logical file name or some metadata. Data confidentiality, integrity and accessibility must be maintained at all times by maintaining replicas across the Grid and using reliable transfer and storage mechanisms. Data is typically represented as files. Access to the data contained within the files is outside the scope of the middleware and is the responsibility of domain specific applications. Multiple replicas of a file may exist within the Grid for redundancy and/or efficiency reasons. To keep track of the files and their replicas a File Catalogue is required. Each file that is registered with the catalogue has a Globally Unique ID (GUID): a hexadecimal number which refers to one or more Storage URLs (SURLs), which give the locations of the file and its replicas. However, a GUID is not a user friendly way of referring to files. The File Catalogue can be used to assign a Logical Filename (LFN) to each GUID to aid comprehension. Metadata, that is data about data, can also be added so that files can be selected based on the values of some attributes. These files are stored on a SE, which provides a logical abstraction of the underlying storage mechanism. A SE may be disk based or tape based with a disk frontend. The Storage Resource Manager (SRM) protocol[79] ensures that there is a standard method of interacting with the data storage. Files are physically transferred to or from a SE using the GridFTP protocol[80]. This GGF standard defines extensions to the FTP protocol to enable high performance, secure and robust data transfers across high bandwidth distributed networks. Higher level services may be implemented to automate the transfer of files and the interaction with the catalogues.
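As a concrete illustration of the naming scheme described above, the following minimal Python sketch models a File Catalogue that maps LFNs to GUIDs and GUIDs to replica SURLs. The class and method names are hypothetical and are not taken from any real catalogue implementation; they only illustrate the LFN/GUID/SURL relationship.

```python
import uuid

class FileCatalogue:
    """Toy catalogue: LFN -> GUID -> list of replica SURLs (illustrative only)."""

    def __init__(self):
        self._lfn_to_guid = {}    # logical file name -> GUID
        self._guid_to_surls = {}  # GUID -> replica locations

    def register(self, lfn, surl):
        """Register a new file under a logical name and record its first replica."""
        guid = str(uuid.uuid4())
        self._lfn_to_guid[lfn] = guid
        self._guid_to_surls[guid] = [surl]
        return guid

    def add_replica(self, lfn, surl):
        """Record an additional replica of an already registered file."""
        self._guid_to_surls[self._lfn_to_guid[lfn]].append(surl)

    def replicas(self, lfn):
        """Return all known SURLs for a logical file name."""
        return list(self._guid_to_surls[self._lfn_to_guid[lfn]])

# Hypothetical LFN and SURLs, purely for illustration.
catalogue = FileCatalogue()
catalogue.register("/grid/myvo/2007/raw/run1234.dat",
                   "srm://se.site-a.example/castor/myvo/run1234.dat")
catalogue.add_replica("/grid/myvo/2007/raw/run1234.dat",
                      "srm://se.site-b.example/dpm/myvo/run1234.dat")
print(catalogue.replicas("/grid/myvo/2007/raw/run1234.dat"))
```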
Security

Security in a Grid is paramount. By its very nature a Grid exposes valuable resources and data across zones of administrative control. Authentication and authorisation of users and services, and the integrity and confidentiality of the data they use, are essential. This is complicated by the requirements of the individual sites that compose the Grid, which want to restrict access to potentially valuable and sensitive resources, and by the requirement that the Grid should support single sign-on. Users should not have to obtain an account on every machine they need to use (in many cases they do not know which machines they are using). The solution chosen is based upon PKI[81] and the concept of VOs. PKI allows users and services to authenticate one another if they both trust a third party. Users have two keys: a public key, which is used to encrypt messages, and a private key, which is used for decryption and is protected by a pass phrase. Security rests on the difficulty of factorising the large number, a product of two large primes, on which the public key is based, and thereby recovering the private key. The public key is presented to a service in the form of a digital certificate, which is signed by a mutually trusted third party, the Certificate Authority (CA). If the service trusts the CA, then it can trust that the person or service that presented the certificate is who they say they are. Authentication, confidentiality and integrity are guaranteed without any exchange of the sensitive private key. However, this does not completely solve the problem. Jobs may have a long run time and need to re-authenticate, or the user may need to delegate responsibility for some action to another service. As the user does not want to expose their private key, another method must be used for authentication. A Grid Proxy is another digital certificate, with a new public and private key (stored on the filesystem so that only the user can read it), signed with the user's original private key. These proxies are typically short lived, to reduce the risk of exposure due to the lower level of protection of the proxy's private key. Proxy certificates provide a chain of trust back to the original owner and to the issuing CA. Proxy renewal and delegation are allowed. Now that the user has been authenticated, they must be authorised to use the resources. Providing access rights on a case by case basis across the Grid would create a huge, unmaintainable burden on site administrators. Instead, users apply for membership of a Virtual Organisation: a group of users, organisations and resources that share a common aim. Membership of the VO entitles the user to use the resources at the VO's disposal, and site administrators are free to allocate resources to a single entity. Delegation[82] allows users to transfer their credentials (or a subset of their credentials) to another service, which will operate on behalf of that user. This ensures that services can be granted the minimum privileges for their task and that every delegated credential is independent. The delegation process consists of several stages, see Figure 3.3. First, the client that owns the certificate and the server which requires a delegated copy create a secure connection. The connection need not be encrypted, as no secrets are passed, but it must ensure integrity. The server then creates a new public and private key pair; the new public key is inserted into a certificate request, which is sent to the client. The client uses the proxy's private key to sign the certificate request, and the completed certificate is returned to the server, where it is stored with the new private key.
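The delegation message flow described above can be summarised in a short sketch. This is purely schematic: the helper functions below (generate_keypair, make_cert_request, sign_request) are hypothetical placeholders, not calls from any real GSI library, and no actual cryptography is performed.

```python
# Schematic sketch of GSI-style delegation (assumed helper functions, no real crypto).

def generate_keypair():
    # Placeholder: the delegatee would generate a fresh key pair here.
    return {"public": "new-public-key", "private": "new-private-key"}

def make_cert_request(public_key):
    # Placeholder: wrap the new public key in a certificate signing request.
    return {"csr_for": public_key}

def sign_request(csr, proxy_private_key):
    # Placeholder: the client signs the request with its proxy's private key,
    # extending the chain of trust back to the user's certificate and the CA.
    return {"cert": csr, "signed_by": proxy_private_key}

# 1. Client and server establish an integrity-protected connection (not shown).
# 2. The server (delegatee) creates a new key pair and a certificate request.
server_keys = generate_keypair()
request = make_cert_request(server_keys["public"])

# 3. The request travels to the client, which signs it with its proxy private key.
delegated_cert = sign_request(request, proxy_private_key="users-proxy-private-key")

# 4. The signed certificate returns to the server and is stored with the new
#    private key; the server can now act on the user's behalf for a limited time.
print(delegated_cert)
```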
Common Middleware

The previous sections discussed the features that are required of any Grid. The following sections discuss some of the most common implementations of these features.

Globus Toolkit

The Globus project[22] was formed in the late 1990s from the experience and software gained from the I-WAY project[21] in the United States. It is now one of the best known providers of open-source Grid software. The Globus Toolkit (GT) has produced many of the fundamental standards and implementations that underlie many of today's Grids. It is not intended to provide a complete implementation of a Grid, but rather to provide components which can be integrated as required. Version 2 of the toolkit, released in 2002, provides non-WS C implementations of features such as GridFTP, which still form the basis of many Grids today. Version 3 was the first to introduce an OGSA-compliant Service Orientated Architecture, a transition which was completed when GT4, the WSRF compliant version, was released in 2005. Service implementations are provided respecting the relevant standards where possible. Containers are provided for Java, Python and C which implement many of the standard requirements, such as security, discovery and management, within which other services can be developed. Workload management is performed by the Globus Resource Allocation and Management (GRAM) component. GRAM defines a protocol for the submission and management of jobs on remote computational resources. GRAM interacts with the LRMS (for example LSF, PBS or Condor), which then executes the task. Data can be staged in and out from the WN. It is important to note that the GT does not provide any brokering functionality, where the most appropriate computational resource is chosen according to some requirements. Data is transferred using the toolkit's implementation of the GridFTP protocol. The Replica Location Service (RLS) provides a File Catalogue, which may be used in conjunction with the Reliable File Transfer (RFT) service to manage third party GridFTP transfers and the interactions with the catalogues. Monitoring and discovery are provided by the Index, Trigger and WebMDS services[83]. The Index service collects information published by other services into a single location. The Trigger service can then be used to perform a defined action when some criterion in the Index service is met. WebMDS provides a web based interface to information collected from either the Index service or another service. The Grid Security Infrastructure (GSI)[68] is perhaps the most widely used component of the Globus Toolkit. It provides tools for the authorisation and authentication of users using a PKI. Rather than submit their valuable private key, users create a short lived proxy which is then used to authenticate with resources. When jobs arrive at a site, GSI can map the user onto a local credential appropriate for that site. The MyProxy[84] credential store is also implemented. This provides a secure location to store long lived credentials, which can then be retrieved by authorised services. This is required as users' proxies often have a shorter lifetime than the jobs that they submit.

Condor

Condor[27] is a distributed batch computing system. Unlike other batch systems, such as LSF or PBS, Condor's main focus is on high-throughput, opportunistic computing. Whereas in high performance computing the goal is to maximise the amount of work that can be performed per second, high throughput computing attempts to maximise the amount of work that can be performed over a longer time period. To enable this, all of the resources of an organisation must be used as effectively as possible. Instead of just using large dedicated clusters, Condor makes it possible to scavenge idle computing resources from all of the resources in an organisation, from large clusters to individual desktops. Failures are handled transparently, and jobs can be migrated from one machine to another if, for example, the user begins to use their desktop again or the machine crashes. Condor consists of three main components: agents, resources and matchmakers. Users submit jobs to agents, which find resources suitable for the jobs via a matchmaker. A machine may simultaneously run an agent and a resource server, so that it can both submit and accept jobs. Jobs and resources specify their requirements using the ClassAd syntax. Using these Classified Advertisements, jobs can specify the attributes they wish their execution resource to have (memory, architecture, etc.), while resources can specify their configuration and the type of jobs they are willing to accept.
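The following sketch mimics, in plain Python, the ClassAd-style matchmaking just described: a job advert lists the attributes it requires, resource adverts describe what they offer, and a matchmaker filters and ranks the candidates. The dictionary attributes and the rank function are illustrative inventions, not real ClassAd expressions.

```python
# Illustrative matchmaking in the spirit of Condor ClassAds (attribute names are made up).
job_ad = {"min_memory_mb": 2048, "arch": "x86_64"}

resource_ads = [
    {"name": "wn01", "memory_mb": 4096, "arch": "x86_64", "mips": 2200},
    {"name": "wn02", "memory_mb": 1024, "arch": "x86_64", "mips": 3000},
    {"name": "wn03", "memory_mb": 8192, "arch": "ppc64",  "mips": 2600},
]

def matches(job, resource):
    """A resource matches if it satisfies every requirement in the job advert."""
    return (resource["memory_mb"] >= job["min_memory_mb"]
            and resource["arch"] == job["arch"])

def rank(resource):
    """Rank matching resources; here simply by processor speed."""
    return resource["mips"]

candidates = sorted((r for r in resource_ads if matches(job_ad, r)),
                    key=rank, reverse=True)
best = candidates[0] if candidates else None
print("matched resource:", best["name"] if best else "none")  # -> wn01
```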
When the agent accepts a job from the user, it publishes the requirements of the job to the matchmaker. The matchmaker then finds all of the resources whose adverts match these requirements and ranks them according to some criterion, processor speed for example. The agent and the resource are informed of the match, and further verification and notification may take place until both parties are satisfied. Once a job has been matched to a resource, a shadow daemon is created on the submit machine to provide the input files, environment and executable required to complete the job to the sandbox daemon on the resource. The sandbox recreates the user's environment, then executes and monitors the execution of the job. It also protects the resource from malicious code by executing the job as a user with limited permissions on the resource. If the executable is linked with the Condor libraries, it can use the shadow to read and write files directly from the submission machine and to create checkpoints. With a checkpoint, the entire state of the program is saved, so that in the event of the resource becoming unavailable for whatever reason, the job can be migrated to another resource and restarted from the checkpoint. Every community of agents and resources that is served by a matchmaker is referred to as a pool. Each pool will typically be administered by a separate department or institution. However, users are not limited to a single matchmaker. The flocking process allows agents to interact with multiple matchmakers across organisational boundaries, provided that they have permission. The user can then utilise resources from multiple pools to complete their tasks. Condor-G is the combination of Condor and Globus, see Figure 3.6. Condor is used for local job management, while Globus is used to perform secure inter-domain communication. Condor-G communicates with a remote GRAM server, which can then communicate with the LRMS. This could be another Condor pool, in which case the process is referred to as Condor-C. Even if the batch system is not Condor, the process of gliding in can be used to create a Condor pool: the first job that is submitted to the batch system starts the Condor daemons, which then become part of the pool of the original user. Figure 3.7 shows Condor-C, which allows for the transfer of jobs in one agent's queue to another agent's queue. A single shadow server is started to monitor the jobs, while the delegated agent performs the actual job submission. The agent that submits the jobs to the resource needs to have direct contact with the sandbox. Condor is used within several middleware projects, including LCG, which is discussed next.

LCG

The LHC Computing Grid (LCG) project was created to deploy and manage the infrastructure necessary to satisfy the LHC experiments' requirements. To achieve this, LCG combines middleware from multiple projects such as Globus, Condor and the European DataGrid[46] (EDG) project. EDG ran from 2001 to 2004 and created the base middleware required to operate a distributed computing infrastructure of the required scale, not just for high energy physics but also for biological and earth science applications. The LCG project is also closely related to its successor, the Enabling Grids for E-sciencE (EGEE)[85] project, and shares many of the same components.
As of July 2007, version 2.7.0 of the LCG middleware is still the version used in production and will be discussed first. A discussion of the differences with the EGEE middleware follows.

Workload Management

The EDG WMS[86] is the main interface between users and the Grid, see Figure 3.8. It accepts job submission requests from users, matches them to appropriate resources, submits them to those resources and monitors their execution. Using the Job Description Language (JDL)[87], users configure their job and specify any requirements that they may have. Job parameters, such as the executable, arguments, environment and input data, along with requirements on the resource, such as minimum memory or maximum run time, are specified in a text file. When a job is submitted, the WMS client contacts the Network Server and transfers the job description, the executable and any other files that are required for the job to the WMS server. The user's proxy is also delegated to the WMS, so that it can operate on the user's behalf. The job is then processed by the Workload Manager, which orchestrates the matching and submission of the job. The RB performs matchmaking using the Condor Matchmaker[88], which matches the requirements of the job with the published resources. The Information Service is used to obtain information about the load on each CE, so that resources can be matched with jobs most effectively. If there is a requirement on input data, the RB can use the Data Location Interface (DLI) of a specified catalogue to determine the location of the SE containing that data. Once the job has been matched to a resource, the Job Adaptor makes any alterations to the job that are necessary before submission. The Job Controller performs the actual job submission, using Condor-G to communicate with the CE. The CE provides a bridge between the global Grid and the local resources. The WMS sends jobs to the CE using Condor-G. The Globus Gatekeeper authenticates the user that is submitting the job and translates the job so that it can be understood by the Job Manager. The Local Centre Authorisation Service (LCAS) and the Local Credential Mapping Service (LCMAPS) are used by the Gatekeeper to authorise the user and to map them to a local account at the site, respectively. A Job Manager specific to the underlying LRMS performs the actual job submission and monitors the execution until completion. The logging and bookkeeping system is updated as the job progresses through the system.
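As an illustration of the JDL described above, a minimal job description might look like the following. The file names, attribute values and the GLUE schema attributes used in the Requirements and Rank expressions are chosen here purely for illustration.

```
Executable    = "analysis.sh";
Arguments     = "run2007.cfg";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"analysis.sh", "run2007.cfg"};
OutputSandbox = {"std.out", "std.err"};
Requirements  = other.GlueHostMainMemoryRAMSize >= 1024;
Rank          = -other.GlueCEStateEstimatedResponseTime;
```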
Data Management

Each site will have at least one SE, which is used to store large volumes of data. Each SE provides an SRM interface to the data. By providing a common interface for accessing data, the details of the implementation and of the underlying storage mechanism can be ignored. The data could be stored on tape in a MSS, such as Castor or dCache, or on disks managed by the Disk Pool Manager (DPM)[89]. Third party transfers of data from one SE to another are supported. The Grid File Access Library (GFAL) provides a POSIX-like library for input and output. GFAL can be given LFNs, GUIDs, SURLs or TURLs and can resolve the file, by contacting a catalogue if necessary, before contacting the SE.

Information Services

Monitoring and accounting services gather information from the individual components of the Grid and publish it in a consistent manner. The information system is a hierarchy of LDAP databases which can be queried to obtain information, see Figure 3.9. At the resource level a Generic Information Provider (GIP) is used to provide the information. Each time the GIP is run, it obtains static information about the resource from a file and dynamic information from the resource via the appropriate plugin. This information is used to populate a Grid Resource Information Server (GRIS) for each resource. Each GRIS registers with a site Berkeley Database Information Index (BDII) and populates the database with information from its resource. Each site BDII then registers with a central BDII, which will then contain complete information about the Grid. For performance reasons the central BDII is actually three BDIIs behind a round-robin DNS alias. BDIIs can also cache information for up to twenty minutes, which improves performance but can result in stale information being delivered. Another system that can be used for monitoring and information is the Relational Grid Monitoring Architecture (R-GMA)[90]. R-GMA implements the GGF GMA specification, which defines three components: producers, consumers and registries, see Figure 3.10. Producers register with the registry, from where consumers can determine which producer can answer their query. R-GMA presents information as if it were contained within a relational database, and clients can connect to consumers and perform SQL queries on the information. The Logging and Bookkeeping system is populated by the WMS and the CE as jobs progress through the system. Users can query the status of their jobs via the WMS to obtain the output.

Security

The Virtual Organisation Membership Service (VOMS)[91] allows fine grained authorisation to be defined. When the user's proxy is created, the VOMS client contacts a central service which assigns additional attributes to the proxy. The VO administrator can approve membership and add the user to groups, Higgs for example, or to specific roles such as administrator. Components of the Grid can then check for these attributes and permit or deny access based on the attributes attached to the user's proxy.

gLite

The gLite middleware is based upon the experience gained in developing and running the EDG middleware and the LCG grid, combined with some of the ideas from the AliEn[92] middleware. The gLite middleware is currently being deployed on the EGEE infrastructure and will eventually become the default production middleware for the WLCG. The main difference between the gLite and LCG middleware is the increased emphasis on a Service Orientated Architecture.

Workload Management

The gLite WMS uses a similar architecture to the LCG WMS but with different components, see Figure 3.11. The Network Server accepts connections from the UI and passes them on to the Workload Manager. There is also a Web Service interface to the gLite WMS, WMProxy, with additional functionality including bulk submission. The Workload Manager orchestrates all of the other components to satisfy the job request. The RB uses information from the Information Super Market (ISM) to match jobs with resources that satisfy their requirements. Information can be pushed to the ISM by a CE or pulled from a CE by the ISM. The Workload Manager contains a Task Queue which can be used to hold job requests until suitable resources are found to satisfy their requirements. The Job Adaptor creates information that is required by Condor-C during submission and on the WN during execution. Condor-C (see Section 3.5.2 for a description) is used to perform submission to a gLite CE after the job has been processed by the Job Adaptor.
The DAGMan is used to resolve dependencies between jobs that are submitted as Directed Acyclic Graphs. The log monitor watches the Condor-C log file for changes in a job's status on the WN. Jobs are started on a gLite CE by submitting Condor daemons via the GRAM gatekeeper. Authentication and authorisation are performed by LCAS and LCMAPS at the gatekeeper. These daemons interact with their peers on the WMS and are used to submit jobs to the LRMS via the Batch Local ASCII Helper (BLAH) abstraction layer. The gLite CE can operate in push or pull mode. An alternative architecture is also available: the Computing Resource Execution and Management (CREAM) CE is a lightweight Web Service interface for managing jobs on a LRMS.

Data Management

The major addition to gLite's data management capabilities over LCG is the File Transfer Service (FTS)[89]. The FTS uses the concept of a channel as a unidirectional connection between two sites. Transfers performed within a channel are monitored to optimise performance and reliability. Users submit transfer requests to the FTS, which then manages the transfers using third party GridFTP or srmcp.

Monitoring with the Experiment Dashboard

The Large Hadron Collider (LHC) is preparing for data taking at the end of 2009. The Worldwide LHC Computing Grid (WLCG) provides data storage and computational resources for the high energy physics community. Operating the heterogeneous WLCG infrastructure, which integrates 140 computing centres in 33 countries all over the world, is a complicated task. Reliable monitoring is one of the crucial components of the WLCG for providing the functionality and performance that is required by the LHC experiments. The Experiment Dashboard system provides monitoring of the WLCG infrastructure from the perspective of the LHC experiments and covers the complete range of their computing activities. The Experiment Dashboard [FIX] monitoring system was developed in the framework of the EGEE NA4/HEP activity. The goal of the project is to provide transparent monitoring of the computing activities of the LHC VOs across several middleware platforms: gLite, OSG and ARC. Currently the Experiment Dashboard covers the full range of the LHC computing activities: job processing, data transfer and site commissioning. It is used by all four LHC experiments, in particular by the two largest, namely ATLAS and CMS. Generic functionality, such as job monitoring, is provided by the Dashboard server to all VOs which submit jobs via the gLite WMS. The Experiment Dashboard provides monitoring to various categories of users:

- computing teams of the LHC VOs
- VO and WLCG management
- site administrators and VO support at the sites
- physicists running their analysis tasks on the EGEE infrastructure

To allow for the development and building of components of Dashboard monitoring applications, a Dashboard framework was designed. This is currently used by other projects and development teams, not exclusively for the development of the monitoring tools. The Dashboard framework is also used for the construction of the high level monitoring system which provides a global view of LHC computing activities across all LHC experiments, both at the level of the distributed infrastructure in general and in the scope of a single site. Web monitoring statistics show heavy use of the Dashboard servers; for example, the dashboard of the CMS VO serves 2300-2500 unique visitors per month, with about 30,000 pages accessed daily. These numbers are growing steadily.
The future evolution of the project is driven by the requirements of the LHC community, which is preparing for LHC data taking at the end of 2009. The main strategy is to concentrate the effort on common applications which are shared by multiple LHC VOs but can also be used outside the LHC and HEP scope. Reliable monitoring is a necessary condition for the production quality of the distributed infrastructure. Monitoring the computing activities of the main communities using this infrastructure in addition provides the best estimate of its reliability and performance. Flexible monitoring tools focusing on the applications have been demonstrated to be essential not only for power users but also for single users. For power users (such as managers of key activities like large simulation campaigns in HEP or drug searches in BioMed) a very important feature is the ability to monitor resource behaviour in order to detect the origin of failures and to optimise their system. They also benefit from the possibility to measure efficiency and to evaluate the quality of service provided by the infrastructure. Single users are typically scientists using the Grid to analyse data, verifying hypotheses on data sets they could not have had available on any other computing platform. In this case, reliable monitoring is a guide to understanding the progress of their activity and to identifying and solving problems connected to their application. This is essential to allow efficient user support, by empowering the users in such a way that only non-trivial issues are escalated to support teams (for example, jobs on hold due to scheduled site maintenance can be identified as such and the user can decide whether to wait or to resubmit).

Introduction

Preparation is under way to restart the LHC[1]. The LHC is estimated to produce about 15 petabytes of data per year. This data has to be distributed to computing centres all over the world, with a primary copy being stored on tape at CERN. Seamless access to the LHC data has to be provided to about 5000 physicists from 500 scientific institutions. The scale and complexity of the task described above requires complex computing solutions. A distributed, tiered computing model was chosen by the LHC experiments for the implementation of the LHC data processing task. The LHC experiments use the WLCG[2] distributed infrastructure for their computing activities. In order to monitor the computing activities of the LHC experiments, several specific monitoring systems were developed. Most of them are coupled with the data management and workload management systems of the LHC virtual organisations (VOs), for example PhEDEx[3], Dirac[4], Panda[5] and AliEn[6]. In addition, a generic monitoring framework was developed for the LHC experiments: the Experiment Dashboard. If the source of the monitoring data is not VO-specific, the Experiment Dashboard monitoring applications can be shared by several VOs. Otherwise, the Experiment Dashboard offers experiment-specific monitoring solutions for the scope of a single experiment. To demonstrate readiness for the LHC data taking, several computing challenges were run on the WLCG infrastructure over the last years. The latest one, Scale Testing for the Experiment Programme '09 (STEP09)[7], took place in June 2009. The goal of STEP09 was the demonstration of the full LHC workflow, from data taking to user analysis.
The analysis of the results of STEP09 and of the earlier WLCG computing challenges proved the key role of the experiment-specific monitoring systems, including the Experiment Dashboard, in operating the WLCG infrastructure and in monitoring the computing activities of the LHC experiments. The Experiment Dashboard makes it possible to estimate the quality of the infrastructure and to detect any problems or inefficiencies. Furthermore, it provides the information necessary to conclude whether the LHC computing tasks are being accomplished. The WLCG infrastructure is heterogeneous and combines several middleware flavours: gLite[8], OSG[9] and ARC[10]. The Experiment Dashboard project works transparently across all these different Grid flavours. The main computing activities of the LHC VOs are data distribution, job processing and site commissioning, and the Experiment Dashboard covers all of them. In particular, site commissioning aims to improve the quality of every individual site, thereby improving the overall quality of the WLCG infrastructure. The Experiment Dashboard is intensively used by the LHC community. According to a web statistics tool[11], the Dashboard server of only one VO, for example CMS, has more than 2500 unique visitors per month, and about 30,000 pages are viewed daily. The users of the system can be classified into various roles: managers and coordinators of the experiment computing projects, site administrators, and LHC physicists running their analysis tasks on the Grid.

Experiment Dashboard Framework

The common structure of the Experiment Dashboard service consists of the information collectors, the data repositories, normally implemented in an ORACLE database, and the user interfaces. The Experiment Dashboard uses multiple sources of information, such as:

- other monitoring systems, like the Imperial College Real Time Monitor (ICRTM)[12] or the Service Availability Monitoring (SAM)[13]
- gLite Grid services, such as the Logging and Bookkeeping service (LB)[14] or CEMon[15]
- experiment-specific distributed services, such as the ATLAS Data Management services or the distributed Production Agents for CMS
- experiment central databases, such as the PanDA database for ATLAS
- experiment client tools for job submission, like Ganga[16] and CRAB[17]
- jobs instrumented to report directly to the Experiment Dashboard

This list is not exhaustive. Information can be transported from the data sources via various protocols. In most cases, the Experiment Dashboard uses asynchronous communication between the source and the data repository. For several years, in the absence of a messaging system as a standard component of the gLite middleware stack, the MonALISA[18] monitoring system was successfully used as a messaging system for the Experiment Dashboard job monitoring applications. Currently, the Experiment Dashboard is being instrumented to use the Messaging System for the Grid (MSG)[19] for the communication with the information sources. A common framework providing components for the most usual tasks was established to fulfil the needs of the Dashboard applications being developed for all the experiments. The schema of the Experiment Dashboard framework is presented in Figure 1. The Experiment Dashboard framework is implemented in the Python programming language. Tasks performed on a regular basis are implemented by Dashboard agents, and the framework provides all the necessary tools to manage and monitor these agents, each of which focuses on a specific subset of the required tasks, such as the collection of input data or the computation of daily statistics summaries.
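A Dashboard agent of the kind described above is essentially a periodically scheduled task. The sketch below is a hypothetical, much simplified illustration of that pattern; the class name, method names and interval are inventions for this example and do not reproduce the real Dashboard agent framework.

```python
import logging
import time

class CollectorAgent:
    """Toy periodic agent: fetch new monitoring records and store them (illustrative only)."""

    def __init__(self, name, fetch, store, interval_seconds=300):
        self.name = name
        self.fetch = fetch            # callable returning new monitoring records
        self.store = store            # callable persisting records to the repository
        self.interval = interval_seconds

    def run_once(self):
        records = self.fetch()
        self.store(records)
        logging.info("%s stored %d records", self.name, len(records))

    def run_forever(self):
        while True:                   # in reality the framework supervises and restarts agents
            self.run_once()
            time.sleep(self.interval)

# Example wiring with stand-in data source and repository.
agent = CollectorAgent("job-status-collector",
                       fetch=lambda: [{"job_id": "abc123", "status": "Running"}],
                       store=lambda records: None)
agent.run_once()
```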
To ensure a clear design and the maintainability of the system, the definition of the actual monitoring application queries is decoupled from the internal implementation of the data repository. Every monitoring application implemented within the Experiment Dashboard framework comes with the implementation of one or more Data Access Objects (DAO), which represent the data access interface: a public set of methods for the update and retrieval of information. Access to the database is made through a connection pool to reduce the overhead of creating new connections; the load on the server is therefore reduced and the performance increased. Experiment Dashboard requests are handled by a system following the Model-View-Controller (MVC) pattern. They are handled by the controller component, launched by the Apache mod_python extension, which keeps the association between the requested URLs and the corresponding actions, executes them and returns the data in the format requested by the client. All actions process the request parameters and execute a set of operations, which may involve accessing the database via the DAO layer. When a response is expected, the action stores it in a Python object, which is then transformed into the required format (HTML page, plain XML, CSV, image) by the view components. Applying the view to the data is performed automatically by the controller. All the Experiment Dashboard information can be displayed using HTML, so that it can be viewed in any browser. Moreover, the Experiment Dashboard framework also provides the functionality to retrieve information in XML (eXtensible Markup Language), CSV (Comma Separated Values), JSON (JavaScript Object Notation) or image formats. This flexibility allows the system to be used not only by users but also by other applications. A set of command line tools is also available. The current web page frontends are based on XSL style sheet transformations over the XML output of the HTTP requests. In addition, in some cases the interfaces follow the AJAX model, which raises JavaScript issues both in debugging and in browser support. Recently, support for the Google Web Toolkit (GWT)[20] was added to the framework. Some of the applications have started to be migrated to this new client interface model, which brings great benefits both for users and for developers: compiled code, easier support for all browsers and out of the box widgets. All components are included in an automated build system based on the Python distutils, with additional or customised commands enforcing strict development and release procedures. In total, there are more than fifty modules in the framework, fifteen of which are common modules offering the functionality shared by all applications. The modular structure of the Dashboard framework enables a flexible approach to implementing the needs of its customers. For example, for the CMS production system, the Dashboard provides only the implementation of the data repository. Data retrieved from the Dashboard database in XML format is presented to the users via a web user interface developed by the CMS production team in the CMS web-tools framework[21].
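To make the DAO and MVC arrangement described above more concrete, the following sketch shows a hypothetical data access object and a controller action that renders its result in either JSON or CSV. The class names, methods and data are invented for illustration and do not correspond to the actual Dashboard framework code.

```python
import csv
import io
import json

class JobSummaryDAO:
    """Hypothetical DAO: hides the database queries behind a small public interface."""

    def __init__(self, connection_pool):
        self.pool = connection_pool

    def jobs_per_site(self, vo):
        # A real implementation would borrow a connection from the pool and run a
        # grouped query against the repository; here we return canned rows instead.
        return [{"site": "CERN-PROD", "jobs": 1200},
                {"site": "RAL-LCG2", "jobs": 431}]

def jobs_per_site_action(dao, vo, output_format="json"):
    """Hypothetical controller action: query via the DAO, then apply a 'view'."""
    rows = dao.jobs_per_site(vo)
    if output_format == "csv":
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=["site", "jobs"])
        writer.writeheader()
        writer.writerows(rows)
        return buffer.getvalue()
    return json.dumps(rows)

dao = JobSummaryDAO(connection_pool=None)  # stand-in: no real database behind this sketch
print(jobs_per_site_action(dao, vo="cms", output_format="csv"))
```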
LHC job processing and the Experiment Dashboard applications for job monitoring

The LHC job processing activity can be split into two categories: the processing of raw data and large-scale Monte Carlo production on the one hand, and user analysis on the other. The main difference between these categories is that the first is a large scale, well-organised activity, performed in a coordinated way by a group of experts, while the second is chaotic data processing by members of a huge, distributed physics community. Users running physics analysis do not necessarily have much knowledge of the Grid or profound expertise in computing in general. With the restart of the LHC, a considerable increase in the number of analysis users is expected. Clearly, for both categories of job processing, complete and reliable monitoring is a necessary condition for the success of the activity. The organisation of the workload management systems of the LHC experiments differs from one experiment to another. While in the case of ALICE and LHCb the job processing is organised via a central queue, in the case of ATLAS and CMS the job submission instances are distributed and there is no central point of control as in ALICE or LHCb. Therefore, the job monitoring for ATLAS and CMS is a more complicated task and is not necessarily coupled to a specific workload management system. The Experiment Dashboard provides several job monitoring solutions for various use cases, namely the generic job monitoring applications, monitoring for the ATLAS and CMS production systems, and applications focused on the needs of analysis users. The generic job monitoring, which is provided for all LHC experiments, is described in more detail in the next section. Since distributed analysis is currently one of the main challenges for LHC computing, several new applications were built recently on top of the generic job monitoring, mainly for the monitoring of analysis jobs. Chapter 5 gives a closer look at the CMS Task Monitoring as an example of the analysis job monitoring applications.

Experiment Dashboard Generic Job Monitoring Application

The overall success of the job processing depends on the performance and stability of the Grid services involved in the job processing and on the services and software which are experiment-specific. Currently, the LHC experiments are using several different Grid middleware platforms and therefore a variety of Grid services. Regardless of the middleware platform, access from running jobs to the input data, as well as the saving of output files to remote storage, are currently the main reasons for job failures. The stability and performance of the Grid services, such as the storage element (SE), the storage resource management (SRM) and the various transport protocols, are the most critical issues for the quality of the data processing. Furthermore, the success of a user application also depends on the experiment-specific software distribution at the site, the data management system of the experiment and access to the alignment and calibration data of the detector, known as conditions data. These components can have a different implementation for each experiment and they have a very strong impact on the overall success rate of the user jobs. The Dashboard Generic Job Monitoring Application tracks both the Grid status of the jobs and the status of the jobs from the application point of view. For the Grid status of the jobs, the Experiment Dashboard relied on the Grid-related systems as an information source.
In the past, the Relational Grid Monitoring Architecture (R-GMA)[22] and the Imperial College Real Time Monitor were used as information sources for Grid job status changes. Neither of these systems provided complete and reliable data. Current development aims to improve the situation by having the Grid services involved in the job processing publish the job status changes themselves, as described later in the chapter. To compensate for the lack of information from the Grid-related sources, the job submission tools of the ATLAS and CMS experiments were instrumented to report job status changes to the Experiment Dashboard system. Every time the job submission tools query the status of the jobs from the Grid services, the status is reported to the Experiment Dashboard. The jobs themselves are instrumented for the runtime reporting of their progress on the worker nodes. The information flow of the generic job monitoring application is described in the next section.

Information flow of the generic job monitoring application

Similar to the common Dashboard structure, the job monitoring system consists of the central repository for the monitoring data (an Oracle database), the collectors, and a web server that renders the information in HTML, XML, CSV or image format. The main principles of the Dashboard job monitoring design are:

- to enable non-intrusive monitoring, which must not have any negative impact on the job processing itself;
- to avoid direct queries to the information sources and to establish asynchronous communication between the information sources and the data repository, whenever possible.

When the development of the job monitoring application started, the gLite middleware did not provide any messaging system, so the Experiment Dashboard used the MonALISA monitoring system as a messaging system. The job submission tools of the experiments and the jobs themselves are instrumented to report the needed information to the MonALISA server via the apmon library, which uses the UDP protocol. Every few minutes the Dashboard collectors query the MonALISA server and store the job monitoring data in the Dashboard Oracle database. Data related to the same job but coming from several sources is correlated via the unique Grid identifier of the job. Following the outcome of the work of the WLCG monitoring working groups, the existing open source solutions for a messaging system were evaluated. As a result of this evaluation, Apache ActiveMQ was proposed for the Messaging System for the Grids (MSG). Currently, the Dashboard job monitoring application is instrumented to use the MSG in addition to the MonALISA messaging system. The job status shown by the Experiment Dashboard is close to the real-time status. The maximum latency is 5 minutes, which corresponds to the interval between sequential runs of the Dashboard collectors. Information stored in the central job monitoring repository is regularly aggregated into summary tables. The latest monitoring data is made available to the users. For long term statistics, data is retrieved from the summary tables, which keep aggregated data with hourly and daily time bin granularity.
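The collector behaviour described above, consuming asynchronous job status messages and correlating them by the unique Grid job identifier before aggregation, can be sketched as follows. The message format, identifiers, function names and in-memory store are hypothetical simplifications; the real system consumes from MonALISA or the MSG and writes to an Oracle database.

```python
import json
from collections import defaultdict

# In-memory stand-ins for the central repository and its hourly summary table.
job_records = {}                       # grid_job_id -> latest known record
hourly_summary = defaultdict(int)      # (hour, status) -> job count

def consume_message(raw_message):
    """Merge one job status message into the repository, keyed by the Grid job id."""
    message = json.loads(raw_message)
    job_id = message["grid_job_id"]
    record = job_records.setdefault(job_id, {})
    record.update(message)             # later messages refine or overwrite earlier fields

def aggregate(hour):
    """Build a coarse time-binned summary, as the real summary tables do."""
    for record in job_records.values():
        hourly_summary[(hour, record.get("status", "Unknown"))] += 1

# Two sources reporting about the same (hypothetical) job: a submission tool
# and the running job itself reporting from the worker node.
consume_message('{"grid_job_id": "job-0001", "status": "Running", "site": "CERN-PROD"}')
consume_message('{"grid_job_id": "job-0001", "events_processed": 5000}')
aggregate(hour="2009-06-15T14:00")
print(job_records, dict(hourly_summary))
```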
Instrumentation of the Grid services for publishing job status information

As mentioned above, the information about job status changes provided by the Grid-related sources is currently not complete and covers only a subset of jobs, which harms the trustworthiness of the Dashboard data. Though some job submission tools are instrumented to report job status changes at the point when they query the Grid-related sources, this query is made from the user's side. For example, if a user never requests the status of their jobs and the jobs are aborted, there is no way for the Dashboard to be informed of the abortion of the jobs. As a result, they can stay in running or pending status until they are turned into the terminated status, with an unknown exit code, by a so-called timeout Dashboard procedure. To overcome this limitation, ongoing development aims to instrument the Grid services involved in the job processing to publish any job status changes to the MSG. The Dashboard collectors consume the information from the MSG and store it in the central repository of the job monitoring data. The services which need to be instrumented, and the concrete implementation, depend on the way the jobs are submitted to the Grid. The advantages of using the MSG are numerous:

- a common way of publishing information;
- a common way of communicating between different components;
- monitoring information is publicly available;
- the load on the Grid services is decreased.

When jobs are submitted via the gLite Workload Management System (WMS), the LB service keeps a full track of the job processing. The LB provides a notification mechanism which allows a client to subscribe to job status change events and to be notified as soon as events matching the conditions specified by the user happen. A new component, the LB Harvester, was developed in order to register with several LB servers and to maintain an active notification registration for each one. The output module of the harvester formats the job status message according to the MSG schema and publishes it to the MSG. Currently, the LB does not keep track of the jobs submitted directly to the Computing Resource Execution And Management (CREAM)[15] computing element (CE). The CEMon service plays a role similar to the LB, but only for jobs submitted to the CREAM CE. A CEMon listener component is being developed in order to enable the publishing of job status changes to the MSG. It subscribes to CEMon for notifications about job status changes and republishes this information to the MSG. Finally, jobs submitted via Condor-G[23], as in the previous case, do not use the WMS service and correspondingly do not leave a trace in the LB. The job status changes publisher component was developed in collaboration between the Condor and Dashboard teams. The Condor developers have added job log parsing functionality to the standard Condor libraries. The publisher of the job status changes reads new events from the standard Condor event logs, filters the events in question, extracts the essential attributes and publishes them to the MSG. The publisher runs in the Condor scheduler as a Condor job, so Condor itself takes care of publishing the status changes.

Job monitoring user interfaces

The standard job monitoring application provides two types of user interface. The first is the so-called Interactive User Interface, which enables very flexible access to recent monitoring data and shows the job processing of a given VO at runtime. The interactive UI contains the distribution of active jobs, and of jobs terminated during a selected time window, by their status.
Job monitoring user interfaces

The standard job monitoring application provides two types of user interface. The first is the so-called Interactive User Interface, which enables very flexible access to recent monitoring data and shows the job processing of a given VO at runtime. The interactive UI contains the distribution of active jobs, and of jobs terminated during a selected time window, by their status. Jobs can be sorted by various attributes, for example the type of activity (production, analysis, test, etc.), the site or CE where they are being processed, the job submission tool, the input dataset, the software version and many others. The information is presented as a bar plot and as a table. A user can navigate to a page with very detailed information about a particular job, for example the exit code and exit reason, the important time stamps of the job processing, the number of processed events, etc. This application is presented in detail in Chapter 6.

The second is the Historical Interface, which shows job statistics distributed over time. The historical view allows the user to follow the evolution of numeric metrics such as the number of jobs running in parallel, the CPU and wall-clock consumption, or the success rate. It is useful for understanding how the job efficiency behaves over time, how resources are shared between different activities, and how various job failures fluctuate as a function of time.
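As an illustration of the kind of aggregation behind the historical view and the hourly summary tables, the short Python sketch below groups terminated jobs into hourly bins and computes a success rate per bin. The record layout and field names are hypothetical, and in the real system this aggregation is performed in the Oracle repository rather than in Python.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical terminated-job records; the Dashboard would read these from
# its central repository.
jobs = [
    {"finished": "2009-03-01 10:12:00", "exit_code": 0},
    {"finished": "2009-03-01 10:47:00", "exit_code": 1},
    {"finished": "2009-03-01 11:05:00", "exit_code": 0},
]

def hourly_success_rate(records):
    """Group terminated jobs into hourly bins and compute the success rate per bin."""
    bins = defaultdict(lambda: {"done": 0, "total": 0})
    for rec in records:
        ts = datetime.strptime(rec["finished"], "%Y-%m-%d %H:%M:%S")
        hour_bin = ts.replace(minute=0, second=0)   # hourly time-bin granularity
        bins[hour_bin]["total"] += 1
        if rec["exit_code"] == 0:
            bins[hour_bin]["done"] += 1
    return {h: b["done"] / float(b["total"]) for h, b in sorted(bins.items())}

for hour, rate in hourly_success_rate(jobs).items():
    print(hour.strftime("%Y-%m-%d %H:00"), "success rate: %.0f%%" % (100 * rate))
```

Pre-computing such bins once, rather than scanning the raw job records for every plot, is what keeps the long-term historical queries cheap while the raw table keeps growing.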
Summary

This chapter introduced the major concepts and components that are required to make Grid computing a reality. In a relatively short space of time the Grid has been created and has moved past the hype to provide serious computing power. Several scientific Grids led the way, but Grids are now increasingly found in commercial organisations, as they provide a flexible, adaptive method of managing computational loads without increasing expenditure. The major components that form a Grid were discussed and examples were given from the major implementations, including Condor, Globus, EDG and gLite. Finally, a reliable system for monitoring Grid activities, the Experiment Dashboard, was presented.

Grid computing embodies the long-standing ambition to achieve more powerful, easier and cheaper information processing. The reality, however, is that Grid technology is still in its infancy. Besides the challenge of finding technical solutions for the interoperability of resources and the virtualised utilisation of large-scale shared resources, there remain social challenges, such as collaboration management and the coordination of security policies, which are not always within a technical scope.

References

[1] LHC homepage, https://lhc.web.cern.ch/lhc/
[2] WLCG homepage, https://lcg.web.cern.ch/LCG/
[3] PhEDEx homepage, https://cmsweb.cern.ch/phedex/
[4] A. Tsaregorodtsev et al., DIRAC: a community Grid solution, CHEP07 Conference Proceedings, Victoria, BC, Canada
[5] P. Nilsson, PanDA System in ATLAS Experiment, ACAT08, Italy, November 2008
[6] P. Saiz et al., AliEn - ALICE environment on the GRID, Nucl. Instrum. Meth. A502 (2003) 437-440
[7] STEP09 Demonstrates LHC Readiness, HPCwire, https://www.hpcwire.com/offthewire/STEP09-Demonstrates-LHC-Readiness-49631242.html
[8] gLite homepage, https://glite.web.cern.ch/glite/
[9] Open Science Grid (OSG) web page, https://www.opensciencegrid.org/
[10] NorduGrid homepage, https://www.nordugrid.org/middleware/
[11] CMS Dashboard stats, https://lxarda18.cern.ch/awstats/awstats.pl?config=lxarda18.cern.ch
[12] Real Time Monitor home page, https://gridportal.hep.ph.ic.ac.uk/rtm/
[13] SAM paper, asked David, waiting for answer
[14] LB homepage, https://egee.cesnet.cz/cz/JRA1/LB/
[15] C. Aiftimiei et al., Using CREAM and CEMON for job submission and management in the gLite middleware, to appear in Proc. CHEP09, 17th International Conference on Computing in High Energy and Nuclear Physics, Prague, Czech Republic, March 2009
[16] J. Moscicki et al., Ganga: a tool for computational-task management and easy access to Grid resources, Computer Physics Communications, arXiv:0902.2685v1
[17] D. Spiga et al., The CMS Remote Analysis Builder (CRAB), Lect. Notes Comput. Sci. 4873:580-586, 2007
[18] I. Legrand, H. Newman, C. Cirstoiu, C. Grigoras, M. Toarta, C. Dobre, MonALISA: an Agent Based, Dynamic Service System to Monitor, Control and Optimize Grid Based Applications, Proceedings of Computing in High Energy Physics (CHEP04), Interlaken, Switzerland, 2004
[19] J. Casey, D. Rodrigues, U. Schwickerath, R. Silva, Monitoring the efficiency of user jobs, CHEP09: 17th International Conference on Computing in High Energy and Nuclear Physics, Prague, Czech Republic, March 2009
[20] Google Web Toolkit, https://code.google.com/webtoolkit/
[21] S. Metson et al., CMS offline web tools, CHEP07 Conference Proceedings, Victoria, BC, Canada
[22] R-GMA homepage, https://www.r-gma.org/
[23] Condor home page, https://www.cs.wisc.edu/condor/
[24] E. Karavakis et al., CMS Dashboard for monitoring of the user analysis activities, CHEP09: 17th International Conference on Computing in High Energy and Nuclear Physics, Prague, Czech Republic, March 2009
[25] R. Agrawal, R. Srikant, Fast Algorithms for Mining Association Rules in Large Databases, Proceedings of the 20th International Conference on Very Large Data Bases (VLDB), Santiago, Chile, 487-499 (1994)
[26] S. Belforte et al., The commissioning of CMS sites: improving the site reliability, 17th International Conference on Computing in High Energy and Nuclear Physics, Prague, Czech Republic, March 2009
[27] GridMap visualization, https://www.isgtw.org/?pid=1000728
[28] EDS, an HP company, homepage, https://www.eds.com/
[29] ALICE monitor, https://pcalimonitor.cern.ch/map.jsp
[30] QAOES, https://dashb-cms-mining-devel.cern.ch/dashboard/request.py/qaoes
