Thursday, December 19, 2019
SaleSoft Analysis
Case 15: SaleSoft, Inc. (A)

Synopsis

Greg Miller and Bill Tanner founded SaleSoft in July 1993 with the objective of marketing PROCEED, a Comprehensive Sales Automation System (CSAS). While PROCEED had received very favorable responses from prospects, converting interest into actual sales was taking a long time, with only five PROCEED systems sold to date. In September 1995, with limited funds and the need to show performance before seeking additional venture capital, Gregory Miller, the president and CEO of SaleSoft, and William Tanner, the executive vice president and CFO, need to decide the future course of action for their company. They are faced with the question of...

3. Understanding the sales and marketing issues faced by a start-up operating in an embryonic market by comparing and contrasting the approaches required to sell products of varying complexity.
4. Exploring the role of a vendor's organizational structure in defining its ability to implement marketing strategy.
5. Understanding the role of automation in linking the sales, marketing, and service functions in a firm.

Recommended Readings

• Major Sales: Who Really Does the Buying? (HBR 82305)
• Automation to Boost Sales and Marketing (HBR 89105)

Teaching Questions

1. What is your plan? Do you plan to continue with PROCEED, or will you introduce the TH product? Provide support for your plan.
2. What is the buying cycle for PROCEED? Who are the people involved in the purchase of a CSAS solution? What is the role of consultants?
3. What is SaleSoft's current approach to selling PROCEED?
4. Quantify the benefits of CSAS to a customer using the information given in Exhibit 7.
5. What value does TH provide a customer? How is this different from the customer value delivered by PROCEED?
6. What is a Trojan Horse? How does it facilitate customer acquisition and retention?
7. How will you price TH?
8. How do you think SaleSoft's organization structure will affect its ability to sell PROCEED or TH?

Details of the Discussion Flow

What is your plan? Do you plan...
Wednesday, December 11, 2019
Capability of a Larger Bank: Financial Risks and Asymmetric Information
Question: Discuss whether the larger a bank is, the better is its ability to diversify.

Answer: The statement to assess is the capability of larger banks. The topic suggests that larger banks should not be limited by regulators, since a larger pool of assets allows diversification of the portfolio. From the regulator's perspective, however, size also means higher risk, with the potential to engulf a large part of the financial system. Large banking demands are a key business driver for regulator decisions in this industry sector. A large bank uses a broad range of technology-centric capabilities that enable new methods of interaction and service delivery to augment the customer experience and potentially transform the business (Admati, 2014). These capabilities are supported by a robust, dynamic and accessible large infrastructure and an open banking system that transforms the analog environment. However, for many banks, the application portfolio represents transactional interaction support that matches consumer expectations of banking from decades past (Khairi, 2015). The demands of today's banking customers transcend ATM ubiquity and the convenience of branch banking locations. Traditional banks that aspire to compete with disruptive financial technology firms have to focus on the formerly defined mid-office and back-office technologies, with specific attention to the supporting hardware platforms.

The challenge for many of today's modernization projects is not simply a change in technology, but often a fundamental restructuring of application architectures and deployment models. Mainframe hardware and software architectures have defined the structure of applications built on that platform for the last 50 years. Tending toward large-scale, monolithic systems that are predominantly customized, they represent the ultimate in size, complexity, reliability and availability. Today's modern computing environments represent a very different model: commodity x86, scale-out environments define a different set of technologies, database management system (DBMS) platforms and architectures. Modernizing legacy systems from a mainframe architecture to a distributed one is a major challenge for any large-scale financial institution (Grubel, 2014). There is a distinct retreat by bank CIOs from high-cost, complex mid-office and back-office environments to more easily accessed functionality promoted by component-based solutions. Simplification, agility and operational efficiency are the primary drivers behind banks' efforts to abandon legacy solutions and past deployment practices.

While many organizations that depend on mainframe architectures could have modernized their existing portfolios over time in an evolutionary fashion, many chose to avoid the risk of change and preserve their extant systems, continuing to leverage their current staff resources. As the reality of the demographic shift of baby boomers became clearer, this risk-averse option is becoming more problematic (Laeven, 2013). Now, the ability to attack such a massive application portfolio and restructure it for modern languages and platforms is seen as a great risk. The procrastination of yesterday's regulator organizations is now driving different modernization decisions, including moving to commercially available packaged solutions for many use cases, including core banking.
However, for many banks, the application portfolio represents transactional interaction support that matches consumer expectations of banking from decades past (Schludi, 2015). In the face of new bank entrants, financial institutions are under increasing pressure to formulate and execute large banking strategies. Designed to transform traditional banking models, these strategies are predominantly technology-supported initiatives impacting customer-facing channels through to the back office. As bank CEOs expect large revenues to expand dramatically, to 47% by 2019, CIO-led large banking programs are expected to have a significant impact on upcoming regulator decisions (White, 2014). Accommodating new areas of regulator investment for large banking has to be offset with corresponding reductions in tactical, commodity regulator spending. Many banks are pursuing core banking renewal to simultaneously reverse traditional regulator spending patterns and replace them with lower-cost, agile platforms.

This is easier said than done, as many of the existing regulator systems that support the overall bank are mission-critical and demand high availability, reliability and performance. Shifting from scale-up architectures (such as the mainframe) to scale-out environments (commonly x86) requires significant investment in understanding the existing systems in great detail, but also a rethinking of the implementation in a modern, multi-server world. Banks need to be able to justify the cost and risk of any modernization project. This can be difficult in the face of a well-proven, time-tested portfolio that has represented the needs of the banking system for decades. However, the demand for modern banking solutions, which increasingly target a different demographic, requires extensive change to the existing systems. The alternative has been to add layers of technology on top of the existing legacy systems, which tends to increase cost and complexity (Reinhart, 2013). Many modernization inquiries are not simply about an aging technology stack, or even an aging workforce, but rather about fundamental changes in the business.

Banking CIOs should embrace a large-first, outside-in thought process when modernizing their legacy portfolios. Modern consumers of banking have expectations set by their experiences with Amazon, Google or Facebook. These expectations are less predictable than in the past and make it more difficult to instantiate a business process in code and expect that code to last for decades, as has been the case for much of the extant banking portfolio (Martins, 2014). Today's systems must be built to change, not to last. Depending on a bank's market strategy and segmentation, organizations are increasingly considering broad-based, functional, packaged solutions for existing systems of record, or the use of commercial off-the-shelf coarse-grained components (BPM, BI query, analytics, report writers) to implement their replacement systems. In support of large banking ambitions, many industry CIOs aim to migrate resources away from commodity systems (such as core banking) and redirect them to differentiating technologies that directly impact the customer experience.

References:

Admati, A., & Hellwig, M. (2014). The bankers' new clothes: What's wrong with banking and what to do about it. Princeton University Press.
Schludi, M. H., May, S., Grässer, F. A., Rentzsch, K., Kremmer, E., Küpper, C., ... Edbauer, D. (2015). Distribution of dipeptide repeat proteins in cellular models and C9orf72 mutation cases suggests link to transcriptional silencing. Acta Neuropathologica, 130(4), 537-555.

Khairi, M. S., & Baridwan, Z. (2015). An empirical study on organizational acceptance of accounting information systems in Sharia banking. The International Journal of Accounting and Business Society, 23(1), 97-122.

Grubel, H. G. (2014). A theory of multinational banking. PSL Quarterly Review, 30(123).

Reinhart, C. M., & Rogoff, K. S. (2013). Banking crises: An equal opportunity menace. Journal of Banking & Finance, 37(11), 4557-4573.

White, E. N. (2014). The regulation and reform of the American banking system, 1900-1929. Princeton University Press.

Laeven, L., & Valencia, F. (2013). Systemic banking crises database. IMF Economic Review, 61(2), 225-270.

Martins, C., Oliveira, T., & Popović, A. (2014). Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1-13.
Tuesday, December 3, 2019
The Controversy of Clinical Versus Actuarial Prediction
In clinical prediction, psychologists use their clinical experience to formulate a prediction based on interview impressions, history data and test scores (Meehl, Clinical versus Statistical 4). The "formula" in the title refers to statistical or actuarial prediction. In actuarial prediction, clinicians consult a chart or table that gives the statistical frequencies of behaviors (Actuarial Prediction). Advocates of the clinical method say that clinical prediction is dynamic, meaningful and sensitive, whereas actuarial prediction is mechanical, rigid and artificial (Meehl, Clinical versus Statistical 4). On the other hand, advocates of the actuarial method claim that the actuarial method is empirical, precise and objective, whereas clinical prediction is unscientific, vague and subjective (Meehl, Clinical versus Statistical 4). The controversy of clinical versus actuarial judgment is not limited to the field of psychology; it also affects education in predicting school performance, the criminal justice system in parole board decisions, and business in personnel selection. Although this controversy can be traced back half a century, social scientists today are still asking: Which of the two methods works better? Can we view any prediction dichotomously as either clinical or actuarial? And, if actuarial predictions are more accurate, should we abandon clinical predictions altogether?

On one side of the controversy, some people feel that using mere numbers to determine whether students can enter graduate school or whether prisoners should be released is dehumanizing (Meehl, Causes and Effects 374). In her book about social psychology, Thompson describes a young woman who complains that it is horribly unfair that she has been rejected by the Psychology Department at the University of California on the basis of mere numbers, without even an interview (88). When my psychology teacher surveyed our class on this issue, about 20 percent of students believed that it is unethical to make predictions based on mere numbers (Brenner). The crux of this ethical concern lies in the belief that each individual is so unique that rigid statistics or equations cannot make the correct prediction in every single case.

Indeed, most psychologists agree that rigid statistics are not sensitive to special cases. Paul Meehl's well-known broken-leg example illustrates how the special powers of the clinician can predict behaviors more accurately in some special cases:

If a sociologist were predicting whether Professor X would go to the movies on a certain night, he might have an equation involving age, academic specialty, and introversion score. The equation might yield [a very high probability] that Professor X will go to the movie tonight. But if Professor X had just broken his leg and is in a hip cast that won't fit in a theatre seat, no sensible sociologist would stick with the equation. (Clinical versus Statistical 24-25)
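To make the contrast concrete, an actuarial formula of the kind described here is just a fixed rule applied to a few coded variables. Below is a minimal, purely illustrative Python sketch: the logistic weights and inputs are invented (nothing comes from Meehl's writing), and it simply shows why a broken leg never enters the calculation.

```python
import math

def p_goes_to_movies(age, film_specialty, introversion):
    """Toy actuarial formula: a logistic rule with invented weights."""
    z = 2.0 - 0.03 * age + 1.5 * film_specialty - 0.8 * introversion
    return 1.0 / (1.0 + math.exp(-z))

# Professor X coded on the only three variables the formula knows about.
prob = p_goes_to_movies(age=45, film_specialty=1, introversion=0.2)
print(f"predicted probability of attending: {prob:.0%}")  # a very high probability

# There is no input for "just broke his leg and is in a hip cast";
# only a clinician who notices that fact can sensibly override the formula.
```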
Essentially, it is very important for clinicians to detect the characteristics of each unique individual and make predictions accordingly, because clinicians deal with individual cases; they make predictions for each unique individual, not for a group of people. Thus, it is the individual case that defines the clinician (Meehl, Clinical versus Statistical 25). Because of the insensitivity of statistics to special cases and the importance of predicting individual cases, many psychologists argue that statistics simply cannot apply to individuals (Meehl, Causes and Effects 374). They believe that clinicians can make predictions about individuals that transcend predictions about people in general (Meehl, Causes and Effects 374). For example, Patriots emphasized in his research on personality inventories that:

In [nonprojective] tests, the results of every individual examination can be interpreted only in terms of direct, descriptive, statistical data and, therefore, can never attain accuracy when applied to individuals. Statistics is a descriptive study of groups, not of individuals. (633)

On the other side of the controversy, advocates of the actuarial approach have questioned the logic behind the assumption that statistics do not apply to single individuals or events. Stanovich uses a very good analogy to illustrate the fallacy behind this assumption (179). He asks us whether we want our operation done by an experienced surgeon who has a low failure probability or an inexperienced surgeon who has a high failure probability (179). Of course, any rational person will choose the experienced surgeon. However, if we believed that probabilities do not apply to the single case, we should not mind having our operation done by the inexperienced surgeon.

This question brings us to think about the role of chance in making predictions. Stanovich noted: "Reluctance to acknowledge the role of chance when trying to explain outcomes in the world can actually decrease our ability to predict real-world events. Acknowledging that our predictions will be less than 100 percent accurate can actually help us to increase our overall predictive accuracy" (175).

An experiment by Fantino and Esfandiari (58-63) demonstrates Stanovich's last point that we must accept error in order to reduce error. In this experiment, the participant sits in front of a red light and a blue light and is asked to predict which light will flash on each trial (60). The experimenter has programmed the lights so that the red light flashes 70 percent of the time and the blue light 30 percent of the time (59). Participants quickly pick up the fact that the red light flashes more often, so they predict the red light roughly 70 percent of the time and the blue light roughly 30 percent of the time (62). The problem is that they do not understand that if they give up on trying to predict correctly on every trial, they can actually be more accurate.

We can demonstrate the logic of this situation through a calculation on 100 trials. In 70 of the 100 trials the red light will come on, and the participant will be correct on about 70 percent of those 70 trials; that is, in about 49 of those 70 trials (70 times 0.70), the participant will correctly predict that the red light will flash. In the same way, we can calculate that in about 9 trials (30 times 0.30) the participant will correctly predict that the blue light will flash. Therefore, the participant predicts correctly only about 58 percent of the time (49 percent from the red light and 9 percent from the blue light). However, if the participant simply gives up on getting every trial right and just predicts the red light on every trial, he will be correct 70 percent of the time (because the red light comes on 70 percent of the time), which is 12 percentage points better than switching back and forth trying to be right on every trial. This is what Stanovich means by accepting error in order to reduce error.
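A minimal simulation of this light-guessing setup, a sketch assuming a 70/30 red/blue sequence and a "matcher" who predicts red with probability 0.7 (the strategy names and trial count are only illustrative):

```python
import random

def run_trials(n=100_000, p_red=0.7, seed=1):
    """Compare probability matching with always predicting the more frequent light."""
    rng = random.Random(seed)
    matcher_hits = 0    # predicts red with probability p_red (probability matching)
    maximizer_hits = 0  # always predicts red (accepting error to reduce error)
    for _ in range(n):
        light = "red" if rng.random() < p_red else "blue"
        matcher_guess = "red" if rng.random() < p_red else "blue"
        matcher_hits += (matcher_guess == light)
        maximizer_hits += (light == "red")
    return matcher_hits / n, maximizer_hits / n

matching, maximizing = run_trials()
print(f"probability matching: {matching:.1%}")    # about 58% (0.7*0.7 + 0.3*0.3)
print(f"always predict red:   {maximizing:.1%}")  # about 70%
```

Over many trials the matcher converges on about 0.7 times 0.7 plus 0.3 times 0.3, or 0.58, while always predicting red converges on 0.70, mirroring the 100-trial calculation above.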
Research on this controversial issue has consistently indicated that actuarial prediction is more accurate than clinical prediction. In Paul Meehl's classic book Clinical versus Statistical Prediction, he reviewed 22 studies comparing clinical and actuarial prediction (83-126). Of these 22 studies, twenty show that actuarial prediction is more accurate than clinical prediction. These twenty studies cover almost the entire clinical prediction domain, including psychotherapy outcome, criminal recidivism, college graduation rates, parole behavior and length of psychiatric hospitalization. A graduate student at JIBE had also done a study comparing clinical and actuarial prediction (Simmons 3). In this study, Simmons compared the predictions made by a regression equation and by two experienced counselors on the school performance of JIBE freshmen (Simmons 3). The results again indicate that the actuarial prediction using the regression equation was more accurate (Simmons 64). In addition, a recent meta-analysis of 136 studies has confirmed that actuarial prediction is better regardless of the judgment task, the type of judges, or the judges' amount of experience (Grove et al. 9). Researchers found that actuarial prediction substantially outperformed clinical prediction in 45 percent of the studies, whereas clinical prediction was more accurate in only 10 percent of the studies (19).

Regarding the research consistently showing that actuarial prediction is more accurate, Paul Meehl said, "There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one" (373-374).

Meehl's actuarial stance is strongly challenged by Robert R. Holt, who is also a renowned clinical psychologist. Holt criticizes the twenty studies Meehl cited in his book for focusing only on the final step of the prediction-making process, which is making the prediction (339). Holt rejects the dichotomous classification of studies as clinical or statistical because in field settings, clinicians do not simply make a prediction by evaluating the given data (338). In field settings, before the clinician can make the prediction, he has to carefully identify the criterion he can predict and choose the predictive variables he wants to use (Holt 339-340). For example, if a counselor wants to predict the school performance of first-year university students, he first identifies the criterion he is able to predict; the criterion can be the GPAs or average marks of the students, but it can also be the students' lecture attendance. He also has to choose which predictive variables he should use; he may use the students' entrance grades or their scores on an aptitude test, or a combination of both. Then, finally, he can make the prediction using either an equation or his own judgment, as sketched below.
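A minimal sketch of that final actuarial step for the counselor example, assuming hypothetical predictors (entrance grade and aptitude score) and made-up historical records; the numbers carry no empirical meaning:

```python
import numpy as np

# Hypothetical historical records: [entrance grade (%), aptitude test score]
X = np.array([
    [78, 61],
    [85, 70],
    [69, 55],
    [92, 80],
    [74, 66],
], dtype=float)
y = np.array([2.9, 3.4, 2.4, 3.8, 3.0])  # observed first-year GPA (the chosen criterion)

# Fit the "actuarial formula": GPA ~ b0 + b1*grade + b2*aptitude (ordinary least squares)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The final, mechanical step: apply the same fixed formula to every new applicant.
new_applicant = np.array([1.0, 80.0, 68.0])  # [intercept, grade, aptitude]
print(f"predicted first-year GPA: {new_applicant @ coef:.2f}")
```

All the preceding steps, choosing the criterion, the predictors and the historical sample, remain human judgments; only this last combining step is mechanical.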
This example shows that even if the clinician uses the actuarial approach in the final step of the prediction-making process, he still plays an important role in all the preceding steps. I agree with Holt that Meehl has oversimplified the distinction between clinical and statistical prediction. I believe that we should view these two methods as falling on a continuum rather than make an all-or-none distinction. Some predictions that can be done completely on computers are more statistical; other predictions, for which psychoanalysts need to collect and analyze data, are more clinical. I also agree with Holt that we should still value clinical judgment even though it is not as accurate. Without clinical judgment, scientists would not be able to form hypotheses and theories, or to analyze research results and data. As Westen and Weinberger said in their article reviewing this controversial issue, "try as we might to eliminate subjectivity in science, we can never transcend the fact that the mind of scientists, clinicians or informants is the source of much of what we know" (609). Nevertheless, when countless research findings point in one direction, I think we should recognize that actuarial predictions are more accurate than clinical predictions (at least in the final step of the prediction-making process).

Some people think that using mere numbers to make predictions is dehumanizing. They feel that using an equation to forecast a person's actions is treating the individual like a white rat or an inanimate object (Meehl, Causes and Effects 374). However, I argue that in certain cases it is unethical to rely on clinical judgment when the actuarial approach has been shown to be more accurate. For example, when a clinical psychologist makes a prediction about whether a student is going to commit suicide within a year, would it be more ethical to use the actuarial prediction that is three times more accurate than the clinical prediction (Brook et al. 03)? The answer to this question should be as obvious as the question about whether we want our operation done by an experienced or an inexperienced surgeon. By admitting that actuarial judgment is more accurate, clinicians who act in the role of experts and imply that they have unique clinical knowledge of individual cases may lose prestige and income; however, the field of psychology, and society, will benefit if we understand that accepting error is reducing error.