Wednesday, July 31, 2019

Review of Related Literature and Related Studies about Mobile Phone

Foreign. According to Miller (2013), a mobile phone is a wireless electronic device used for telephone and multimedia communications, which means people can carry it and communicate anywhere at any time. Singh (2011) said that communication is the process of expressing one's thoughts, ideas, and messages to another person for the sake of personal or business interest. Communication is more effective if you receive a response from the other person. You can express your thoughts to another person through verbal communication, non-verbal communication, or mass communication. McGuigan (2013) stated that text messaging is a term for short communications made through cell phones. It uses what is called the Short Message Service, and so is often called SMS for short. It is also sometimes referred to as "txting," using the shorthand common in such messages as a way of dealing with short character limits and often bulky interfaces. Ziggs (2011) reported that ages 13 to 17 send the highest number of text messages, sending and receiving an average of 1,742 text messages per month. Jenna Langer (2009) said that men prefer to use communication to gain social status and use their social networks in a task-oriented manner. Face-to-face communication differences between genders have been shown to carry over into e-mail and computer-mediated communication, where the lack of nonverbal cues leads women to communicate more thoroughly. Amanda Lenhart (2010) noted that one major influence has to do with the economics of the cell phone: who pays for the costs associated with the cell phone and its use, and what are the limitations of the service plan for the phone? Does the user have unlimited minutes to talk or the ability to share minutes? Does he or she have an unlimited or pay-as-you-go text messaging plan? And regardless of who pays, what type of plan does the teen have: a shared family plan, an individual plan with a contract, or a contract-less pre-paid phone? Each of these variations can influence how teens and adults use their mobile phones. Amanda Lenhart (2010) also reported that about one in five teen cell phone users (18%) are part of a prepaid or pay-as-you-go plan, and just one in ten (10%) have their own individual contract. The type of cell phone plan a teen has is significantly related to household income. Teens from lower-income households are more likely to use prepaid plans or to have their own contract, while teen cell phone users in households with incomes of $50,000 or greater are most likely to be part of a family plan.

Local. Celdran (2002) declared that the characteristics of connectivity, speed, cost effectiveness, mobility and confidentiality of text messaging, and its adaptability to Filipino culture, have made SMS the most popular form of private communication technology in the country.

Bibliography

Celdran, D. (2002). The Philippines: SMS and Citizenship. Retrieved March 10, 2013, from http://www.dhf.uu.se/pdffiler/02_01/02_1_part9.pdf

Jenna Langer, V. J. (2009). Gender Differences in Text Message Content. Retrieved March 10, 2013, from http://www.jennalanger.com/wp-content/uploads/2010/09/LangerJenna-Gender_dif_SMS_Content.df

Lenhart, A. (2010). Teens and Mobile Phones. Retrieved March 10, 2013, from Pew Internet: http://pewinternet.org/Reports/2010/Teens-and-Mobile-Phones/Chapter-1/The-economics-of-cell-phones–Plan-Types.aspx

McGuigan, B. (2013, March 08). What is Text Messaging? Retrieved March 10, 2013, from wiseGEEK: http://www.wisegeek.com/what-is-text-messaging.htm

Miller, B. (2013, March 05). What Is a Mobile Phone? Retrieved March 8, 2013, from wiseGEEK: http://www.wisegeek.com/what-is-a-mobile-phone.htm

Singh, H. (2011, July 05). Communication plays an important role in our daily life. Retrieved March 10, 2013, from India Study Channel: http://www.indiastudychannel.com/resources/142618-Communication-plays-an-important-role-our.aspx

Ziggs, D. (2011, February 09). Average Monthly Calls Vrs

Tuesday, July 30, 2019

Economic Instruments For Protecting The Environment Economics Essay

An economic instrument's aim is to alter environmentally destructive behaviour by putting a cost on polluters, while legislation's aim is to alter the polluter's behaviour by setting laws or restricting certain practices. Traditionally, both governments and businesses have preferred to use legislative instruments rather than economic instruments as environmental policy. This is because they think economic instruments cannot change polluters' behaviour directly and that a certain amount of uncertainty is involved. From the perspective of governments, they fear that additional charges may cause inflation and that the low-income group will be affected by unwanted distributional effects. The public may believe that companies can obtain the right to pollute if they are able to pay the pollution charges. Similarly, from the perspective of businesses, they do not prefer economic instruments, since the additional charges would increase their costs, and they can influence legislation through negotiation. The charge is the most common form of price-based measure. A price that polluters have to pay for the environmental pollution they have caused can be considered a charge (OECD, 1989). Charges can be classified as user charges, product charges and effluent charges. To prevent resource abuse, users of a resource should pay user charges. To encourage recycling or discourage disposal, product charges are added to the product price. To prevent water pollution, effluent charges are used, with payments depending on the composition and quantity of a company's sewage. Normally, governments keep effluent charges at a low level in order to prevent evasion of the charges through illegal dumping. There are several arguments about the effectiveness of price-based measures and legislative measures for pollution control; the literature on these arguments is reviewed in the following paragraphs.

The rationale. Environmental economists such as Schelling (1983), Pearce et al. (1989), Tietenberg (1990) and Ekins (1999) outline a standard view in texts and articles. The argument is that degradation of the environment occurs because the market system fails to put a value on the environment. Savage and Hart (1993, p. 3) indicated that most economists believe that "making polluters obey the mechanism of the market is the most effective way to tackle environmental problems": a price should be placed on those who want to use environmental resources, to ensure that the social costs are not larger than the social benefits. These costs and benefits therefore have to be measured, and in order to make them measurable the environment has to be turned into something marketable. This leads to pollution rights markets, and to subsidies or taxes introduced as prices to reflect the cost of pollution to society and the cost of pollution rights quotas (Savage and Hart, 1993). Market-based measures are similar to price-based measures in setting a price on, and determining demand for, the amount of pollution discharged (Schelling, 1983). The inclination of economists to solve problems through the market is an ideologically based one: its foundations come from Adam Smith's perception that self-interested individual endeavour in a competitive market system maximises social benefits.
Economics is so entrenched in this philosophical tradition that most economists probably do not recognise, until they go out into the world of non-economists, that it rests on an assumption of moral philosophy… (Kelman, 1983, p. 297). Although not every economist is persuaded by it, the neoclassical approach over which environmental economics ranges and which its studies cover embodies this philosophy (Rosewarne, 1993). In reality, given that the workings of markets, their imperfections and the problems related to them are well elaborated (Moran and Wright, 1991), environmental economics and the arguments around sustainable development are dominated by neoclassical economics.

Internalising environmental costs. Some environmental resources are bought and sold in the market, yet their prices do not indicate the true cost of obtaining them, since those prices do not include the cost of the environmental destruction involved. Other environmental resources, for instance clean water, are not paid for at all and are therefore viewed by economists as free. Economists argue that environmental assets are likely to be depleted or mistreated because their prices are too low. Their argument is that external benefits and costs not considered in market transactions should be "internalised" by altering prices: the company that causes an external cost by providing goods or services becomes liable to pay for it. Charges or taxes are one possible way to achieve this (Bailey, 2002). For instance, if a company dumps sewage into a stream, the cost of the lost recreational environment is covered by charging a fee. Price-based instruments such as charges and taxes are, in theory, meant to make external costs part of the polluter's consideration. Although law can also limit pollution discharges, economists still prefer price-based measures for pollution control. An advocate of economic instruments, Thomas Schelling (1983, p. xiii), states in Incentives for Environmental Protection that a well-designed pricing mechanism can achieve what well-designed and sensible regulatory standards can. All parties accept, however, that legislative instruments cannot be fully replaced by economic instruments; in practice, environmental policy should be a mix of market-based instruments, standards and laws. The optimal pollution level is, in theory, the level at which the cost of cleaning up equals the cost of the environmental damage (Samuelson, 1954). Some economists argue that reaching this optimal damage level is the most efficient outcome for the market. Since the optimal level of damage or pollution is usually not zero, many people find the idea strange and abhorrent, but the optimal level is the central assumption of the theory of internalising costs under price-based instruments. If the price charged is set equal to the environmental damage cost, then in theory the firm will keep cleaning up pollution as long as doing so is cheaper than paying the remaining charge: the level of pollution reduction increases until paying the charge becomes less expensive than reducing pollution further.
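To make the marginal logic in the preceding paragraph concrete, the following sketch (not part of the original essay) works through a stylised example: a firm facing a per-unit emission charge abates pollution unit by unit for as long as abating the next unit costs less than paying the charge on it. The function name and all numbers are invented for illustration only.

```python
# Illustrative only: stylized numbers, not drawn from the essay or any real firm.
# A firm facing a per-unit emission charge reduces pollution unit by unit as long
# as the cost of abating the next unit is cheaper than paying the charge on it.

def abatement_choice(marginal_abatement_costs, charge):
    """Return how many units the firm abates, its abatement spending, and its charge bill.

    marginal_abatement_costs: cost of abating the 1st, 2nd, ... unit (rising).
    charge: fee the firm must pay per unit of pollution it does NOT abate.
    """
    abated = 0
    abatement_cost = 0.0
    for cost_of_next_unit in marginal_abatement_costs:
        if cost_of_next_unit < charge:      # cheaper to clean up than to pay
            abated += 1
            abatement_cost += cost_of_next_unit
        else:                               # cheaper to pay the charge; stop here
            break
    remaining_units = len(marginal_abatement_costs) - abated
    charge_bill = remaining_units * charge
    return abated, abatement_cost, charge_bill

# Rising marginal abatement costs for 6 units of pollution, and a charge of 50 per unit.
units_cost = [10, 20, 35, 55, 80, 120]
print(abatement_choice(units_cost, charge=50))   # -> (3, 65.0, 150)
```

With these made-up figures the firm abates three units (costing 10 + 20 + 35) and pays the charge on the remaining three, because the fourth unit would cost 55 to abate but only 50 to leave and pay for.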
Stopping at that level is economically efficient because, if the polluter spent more on additional pollution control, the extra costs would outweigh the benefits. To society this may not look like an optimal solution. Economists argue, however, that society is left no worse off by the social cost the polluter causes, provided the polluter has paid the full cost of the pollution, since the company compensates for the damage by paying the government. Theoretically, the companies' payments in the form of charges are a way of correcting the damage they do to the environment (Beder, 1996). However, theory and reality diverge. The first consideration is whether the circumstances of environmental destruction can really be corrected by imposing a price on polluters; here theory and reality part company. The second consideration is whether the pollution charges collected are actually used to tackle environmental problems. The argument is made that society is still no worse off if the money is spent on something equally valuable, but this view is hard for those who suffer from the pollution to accept. A further assumption is that the lost environmental benefit can be replaced by buying other benefits on the market; environmentalists counter that other goods cannot substitute for environmental quality (Goodin, 1992) and that human-made and natural capital are not perfect substitutes (Costanza and Folke, 1994). In effect, internalising costs assumes that paying for environmental destruction is preferable to avoiding the destruction. There is also a theoretical assumption that, at the optimal damage point, pollution reduction becomes more and more costly while the environmental gain becomes smaller and smaller (see Fig. 1). The idea behind this principle is that a company can achieve pollution reduction by changing its production process or adding pollution control equipment; in the long term such changes to the production process may even save the company money. It cannot simply be assumed that the environmental destruction done is equal to the charges. Daly and Cobb (1989) point out that the valuation of economic loss is subject to uncertainty and wide divergence, not merely to the physical effect. In practice, regulatory agencies and governments do not try to tie external costs to taxes or charges. Charges can be used to raise revenue to cover the costs of programmes that tackle pollution problems, but charges are normally designed to create an incentive for polluters to minimise their discharges. This reflects the fact that polluters do not fully pay the costs of the destruction they cause. The economic instrument's major aim, internalising environmental costs and finding the optimal pollution level, is therefore difficult to achieve.

Environmental effectiveness and incentives. Jacobs (1993) points out that economists argue that imposing costs, even where the polluting activity's real environmental costs are not internalised, still gives firms an incentive to reduce pollution, and money can be saved as a result. There is also an argument that regulatory standards may make companies achieve predetermined limit targets but leave them with no incentive to reduce pollution further, whereas price-based instruments provide a financial incentive. Stavins and Whitehead (1992) advocated continuing to motivate companies to improve their financial performance through technological development.
The companies can then reduce their pollutant outputs. If economic instruments are properly structured, companies can be motivated to comply and to engage in continuous improvement and innovation (Grabosky, 1993). Economic determinism assumes that desirable technological changes will occur automatically under suitable economic conditions (Baranzini et al., 2000). On this view, political and social factors play no part in technological development, even though a great deal of scholarship in the academic discipline of science and technology studies examines what technological developments have actually been based on (MacKenzie and Wajcman, 1985; Bijker et al., 1987). Although imposing prices on companies for environmental damage may put pressure on them to minimise the charges, there is no guarantee that a company will respond in the areas where the charges are levied (Rosenberg, 1976, Chapter 23). Using new technology and methods to shift the burden to other parts of the operation, or passing the cost on to the customer, may be a cheaper and more profitable way of reducing the environmental cost. The effectiveness of the incentive depends largely on the size of the subsidy, charge or tax imposed. If it is low, companies may not change their technology in order to lower the environmental cost, and the effect is small (Jacobs, 1993, p. 7). Many studies indicate that the incentive is weak when the charges are too low (Postel, 1991, p. 32; Stavins and Whitehead, 1992, p. 31; Barde and Opschoor, 1994, p. 25). Theoretically, there is no reason why legislative instruments should fail to create a motivation for innovation that continually improves performance (Ashford et al., 1985; Caldart and Ryan, 1985; Cramer and Zegveld, 1991). Caldart and Ryan (1985), for instance, argue that regulatory approaches are not bound by existing economic conditions and technologies; nothing prevents legislative instruments from prompting companies to carry out more technological innovation and so alter their economic circumstances. In practice, policy makers rarely take this approach, for much the same reason: charges that are high enough are rarely levied because policy makers are too worried about the reaction of industry, and they prefer to regulate within the existing economic and technological framework. Environmental legislation can restrict the discharge levels that must be met and the type of technology that must be used, for example through approaches based on Best Practicable Technology (BPT) and Best Available Technology (BAT). It has conventionally been believed that restricting the kinds of technology policy makers allow harms innovation activity in the United States. In Australia, policy makers have not told companies what standard of technology should be used; instead, discharge standards have been set on the basis of existing technologies. As a result, there is little incentive to change technology, since the standard is set to be achievable with existing technology rather than being purely an environmental goal (Beder, 1989). The reason legislative instruments and price-based instruments fail to create incentives is the same: in both cases the strength of government institutions, the willingness of politicians, and the extent of community participation and scrutiny are the decisive factors. Several different problems with policy instruments are listed by J. Rees (1988, p.
175): first, the goals of policies are often conflicting, confused and shifting; second, the implementation process cannot, and does not, run along consistent, clear ends-means lines; and last, policy instruments are manipulated by interest groups within both the regulating authorities and the regulated community. Brian Wynne (1987, pp. 4-5) likewise points out that the interaction of competing interests is essential to how standards are implemented; the interested parties include the regulated and the regulatory authority, government and the nearby community, and the process normally involves negotiation, adaptation and compromise. Rees suggests that advocates of economic mechanisms tend to assume that the pollution control system consists chiefly of economically rational polluters and entrepreneurs operating without capital, organisational, perceptual and technical limitations, which is not the case. For instance, although the cost of changing the production process or installing pre-treatment equipment may be lower than the charges in the long term, most companies are not willing to make the initial outlay; under legislative instruments, by contrast, companies have no choice. Rees notes that several studies have shown that 25-30% of polluters do not understand the pricing system, under which payments might differ radically if the polluter changed the volume or the strength and composition of the sewage discharged (Rees, 1988, p. 184). Many polluters do not know how to change their pollution reduction methods and so cannot seek the decisions most favourable to their own interests.

Cost effectiveness and economic efficiency. Even if, under price-based measures, environmental costs are not fully internalised and the incentive to change technology is no greater than under legislative measures, economists argue that price-based measures are more cost-effective and economically efficient than legislative measures. They point out that regulatory standards impose a high cost burden on companies and hinder economic growth. Stavins and Whitehead characterised the legislation of the 1970s and 1980s as having been implemented without regard to costs, and they prefer protecting the environment through market-based incentives as the alternative: concern is heightened about the impact of regulations on economic strength and competitiveness in international markets; under regulation, behaviour is dictated and profit opportunities are removed, unnecessary burdens are placed on the economy, and more effective environmental technologies are stifled. Advocates of economic instruments claim that regulations are not cost-effective because they require emissions from all companies to meet uniform standards without considering whether the companies are able to meet them, and because they require companies to install particular pollution control technologies without considering whether the companies can afford them. Although regulations can improve environmental quality, the cost is too high. Economic instruments, on the other hand, are said to let businesses share the pollution control burden in an effective way (Stavins and Whitehead, 1992, p. 9). The suggestion arises because pollution reduction is less expensive for some companies than for others.
It is therefore reasonable to expect that having these companies reduce more of the pollution is more effective than requiring the same reduction from companies for which it would not be cheap. In this way the marginal cost of pollution control, that is, the additional cost of achieving one extra unit of pollution reduction, is what matters, and the businesses' marginal costs of pollution control would be equalised. For instance, if the pollution discharge fee is set at the same rate for all companies, a company will find it cheaper to reduce its discharge than to pay the fee whenever the cost of the reduction is less than the fee payment (a stylised numerical sketch of this argument is given at the end of this essay). In most cases, however, the cost savings from economic instruments turn out not to come from implemented pollution reduction. Jacobs (1993) points out that the efficiency claim is a theoretical argument rather than an empirical one, and gives the following example: sewage charges were raised by 400% in Britain, yet the government failed to change the behaviour of companies, even though part of the pollution control investment would have paid for itself. The affected companies did not understand the change in the system; the pollution reduction issue was handled not by engineers but by the finance department, so the companies did not know what technological options were available. It was therefore more effective to require the companies to install better technology through regulation. Savage and Hart (1993) suggest that efficiency is a major foundation of the rational, textbook fantasy world of intermediate economics: a market mechanism unconstrained by simultaneous imperfections such as imperfect competition or monopolies, externalities, uncertainty, asymmetric information, taxes, incomplete markets or moral hazards. Economists frequently argue that decision making by centralised government is less efficient than the market, because under a market-based mechanism information is gathered automatically, the balance of supply and demand is ensured and resources are allocated efficiently. Pollution charges hardly fit this sort of argument, however, because enforcement and monitoring are still needed: the policy maker still has to know how much waste is discharged and to ensure that companies have paid the discharge fee correctly and have paid for their waste. Any environmental control system has to be checked by inspectors to make sure that claimed discharges, resource extractions or emissions are correct; a bureaucracy is therefore necessary, even if its members are tax inspectors rather than regulatory ones (Jacobs, 1993).
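The sketch below is not part of the original essay; every figure in it is invented for illustration. It contrasts a uniform standard (each firm must abate the same amount) with a uniform per-unit charge (each firm abates only while that is cheaper than the charge), which is the cost-effectiveness argument referred to above.

```python
# Illustrative only: stylized numbers invented for this sketch, not from the essay.
# Compare a uniform standard (every firm abates the same amount) with a uniform
# per-unit charge (each firm abates only while that is cheaper than the charge).

def cost_of_abating(marginal_costs, units):
    return sum(marginal_costs[:units])

def units_abated_under_charge(marginal_costs, charge):
    return sum(1 for c in marginal_costs if c < charge)

low_cost_firm  = [5, 10, 15, 20, 25, 30]    # finds pollution reduction cheap
high_cost_firm = [30, 45, 60, 75, 90, 105]  # finds pollution reduction expensive

# Uniform standard: both firms must abate 3 units each (6 units in total).
standard_total = cost_of_abating(low_cost_firm, 3) + cost_of_abating(high_cost_firm, 3)

# Uniform charge of 35 per unit: the cheap firm abates 6 units, the dear firm 1.
charge = 35
low_units  = units_abated_under_charge(low_cost_firm, charge)   # 6
high_units = units_abated_under_charge(high_cost_firm, charge)  # 1
charge_total = cost_of_abating(low_cost_firm, low_units) + cost_of_abating(high_cost_firm, high_units)

print(standard_total)  # 165 spent to abate 6 units under the uniform standard
print(charge_total)    # 135 spent to abate 7 units under the uniform charge
```

Under these made-up numbers the charge achieves slightly more total abatement at a lower total abatement cost, which is the sense in which economists call price-based measures cost-effective.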

A Review of Family to Family: Leaving a Lasting Legacy by Dr Jerry Pipes and Victor Lee Essay

Having two children of his own, Paige and Josh, Dr. Jerry Pipes has written several books dedicated to families and their connection to Christ. These include Building a Successful Family, Becoming Complete and the book being reviewed, Family to Family. Pipes received his B.S. at Texas A&M University, followed by his M.A. at Southwestern and then his D.Min. at Luther Rice Seminary. He is the President of Jerry Pipes Productions, which seeks to "impact people through cutting edge resources and events" (jerrypipesproductions.com). Pipes has written instructional booklets and training processes that have exceeded 18 million copies. His teachings have spread internationally through his involvement in assemblies, crusades and conferences. According to his website, Pipes' most recent trip to the Northcrest Baptist Church in Meridian, MS resulted in over 445 decisions for Christ. Co-author Victor Lee entered full-time ministry in 1995 and is currently the Minister of Single Adults and Evangelism at First Baptist Concord in Knoxville, TN. He has contributed to and edited several Christian publications, including special event evangelism material. Lee and his wife, Judy, reside in Wake Forest, NC.

Content Summary. The cover of Family to Family shares instantly the book's purpose: a way for hurried parents to leave a lasting legacy with their children and find true significance in the process. Pipes and Lee have constructed a guide aimed at growing as a family in Christ and sharing that relationship with one's relatives, community and acquaintances. The introduction explains that the book is not a quick fix but a helpful tool for becoming a healthy, on-mission family. The book's definition of family is "persons related to one another by marriage, blood, or adoption" (p. 9). The first chapter discusses how to become a healthy family in Christ. Sharing shocking statistics concerning the lack of family engagement with one another, the authors instruct one to first examine one's family. They teach that a healthy family should mirror one that spends quantity and quality time together; one that expresses commitment to one another and to the family as a whole; one that has both parents equally involved in raising the children; one that finds significance in Christ; one that passes the baton of faith to the next generation; one that spends God-centered time together (p. 2-15). In order to have a reflection of a healthy family, the authors suggest six spiritual growth principles: quiet time, lordship, development of a powerful prayer life, personalization of God's word, Christian friendships and accountability, and development of a ministry (p. 13). Living out God's purpose of the Great Commission is the framework of a healthy and growing family unit (p. 15). Chapter two focuses on developing a family mission statement. The mission statement serves as a centerline that intentionally submits to the ways of Christ. God's priorities become the family's priorities. The mission statement begins with the parents and is passed down to the children. When constructing a family mission statement, the family should consider the mission of Jesus (p. 27). The authors provide several Scriptural references to this mission. They also provide five foundational elements to consider in a mission statement: (1) the authority of Jesus; (2) making disciples; (3) the comprehensive nature of the call to teach "all nations"; (4) baptizing new believers; (5) the eternal presence of God (p. 28-29).
The process of developing a mission statement must be fun and inclusive of all members. The family should consider their goals, take a family inventory, and conceptualize and personalize the statement. The authors provide many examples of family mission statements. Since nine in ten people come to Christ before reaching age 25, the authors dedicate chapter three to passing on the baton to the next generation. This requires trust, communication, involvement and discussion. Raising children to become mature in Christ begins with the parents and is fed by the church, not the opposite. The seven key elements of mentoring children include: modeling, presence, affirmation, praying with and for them, transparency, doing things with them and not for them, and making one's actions reflect the Word of God (p. 52-57). The authors give advice on family devotion and family worship (p. 60-63). Chapter four focuses on sharing one's faith outside of the home. This takes the form of lifestyle evangelism. One is taught how to minister to one's immediate family, relatives, friends, community, acquaintances and person X (p. 73). Person X is anyone with whom one will (most likely) never have further contact. There is also guidance on ministering to special needs children. The authors provide several evangelism ideas for each type of relationship. They discuss ministry evangelism (including the key methods of look, listen and linger), lifestyle evangelism and family evangelism. Chapter five is closely linked to chapter four as it teaches one to go into the church. The authors share that an on-mission, healthy family will make it their effort to spread the Word of God by integrating ministry and the church (p. 7). The book gives an example of how to connect with the community while ministering through the church. It suggests a family block party that has the qualities of being inclusive, intimate, intentional, informal, interesting and imaginative. Pipes and Lee also instruct one to engage in family mission trips at least once every two years. The book labels the Jesus Video as an effective and non-confrontational way to share Christ while in the mission field. Chapter six concludes the book as it teaches one to share the message. It stresses the importance of prayer and implements the HEART acronym in association with praying for the lost (H = heart is receptive to the gospel, E = spiritual eyes and ears are open to the message, A = attitude toward sin matches God's attitude, R = God releases them to believe, T = trust in Christ to live a transforming life) (p. 105). The authors provide guidance on ministering to individuals where they are in life. They teach that receptivity will come in varying levels. Most importantly, chapter six teaches that one is not alone in the mission of sharing the gospel. It also gives many methods of sharing successfully, which in turn raises the family to follow the ways of Christ. The conclusion is simplified into one page, challenging the family to step out and respond to the call of evangelism and to be an on-mission family.

Evaluation. Jerry Pipes and Victor Lee have constructed a book that convinces the reader to mature as a family in the direction of Christ. Its chapters overflow with logical and structural guidance toward reaching this goal. Every section is presented in a categorized manner that is easy to follow. Along with this, the chapters include appropriate and practical examples for the particular lesson being discussed.
The most interesting example provided in the book is in chapter six, describing how to share the message of Christ. In this example the authors explain that one is not alone in the mission of spreading the gospel: after prayer, a man named Chris feels the deep need to be vulnerable and sensitive while sharing his faith. While Chris is on a plane he begins a conversation with a married couple. The couple asks Chris about his profession and he replies that he is involved in a para-church ministry. In disgust the couple asks why he would do that. He replies in a heart-breaking manner that he, his brother and his best friend were all very depressed. The depression resulted in Chris finding Christ, while the brother and best friend committed suicide. The couple is quickly moved to tears because they are on the way to bury their son, who has recently committed suicide. This is a powerful story and one full of God's presence. The authors used the story to show how greatly involved the Holy Spirit is in teaching, guiding and using his followers for the advancement of the Kingdom. The inclusion of examples is a strong point found in Family to Family. The authors also include biblical support throughout the book, stressing the Scriptural references to the Great Commission. Any instruction given is accompanied by biblical command. For example, the authors teach that discovering real purpose in life involves making choices about "who you are and what you stand for" and reference Joshua 24:15, which states, "And if it seems evil to you to serve the Lord, choose for yourselves this day whom you will serve . . . But as for me and my house, we will serve the Lord." The authors' main presupposition, that many families do not spend adequate time with one another sharing the Word of God and the love of Christ, is supported with statistical data (i.e., 88 percent of the children who grow up in churches leave the church and never return) (p. 50). Pipes and Lee conclude that by following the guide given in Family to Family, the family unit will be more prepared to have meaningful, Christ-filled relationships within and outside of the family, respond to the call of Christ and pass the baton of faith to future generations. It is difficult to point out many flaws within the book. For the purpose of this critique, the only suggestion for improvement would be to tie in the theme of family in a more distinct manner throughout the chapters. At times it seemed that the book was geared more toward evangelism than its stated theme of leaving a lasting legacy with children and finding significance along the way. Nonetheless, Family to Family is an appropriate guide for growing in Christ, both individually and as a family. Implementation of its strategies and suggestions may prove to be a beneficial tool for parents and singles. Dr. Pipes has shared his book internationally and has continued to win souls to Christ. Family is an important aspect of life, and when molded in the way of the Lord the family, as a unit, can share the love and knowledge of Christ with the world around it. Salvation becomes a domino effect: family to family.

Monday, July 29, 2019

Please write a good topic for the paper Essay Example | Topics and Well Written Essays - 500 words

There are studies which mention that in only a few decades, the population of our city has declined by almost 50% (Rey, 2001). However, I would like to point out that if we compare the populations of whites and blacks in our city, we come to know the astonishing fact that the population of blacks has in fact increased in the city, and it is only the white population that has decreased in number (Population of Buffalo, 2005). So, are we only reporting the figures for whites? And if so, why only whites? Is there a hidden interest behind such reports? In my opinion, and as you can also see around the world, increasing populations are generally a problem. I'm surprised to see that it is only in Buffalo that a decreasing population has appeared as a problem. Fewer people means less traffic, less need for resources, fewer troubles and consequently more opportunities and prosperity. But why is it that our media portrays this trend as negative for our city? Again, is there a hidden interest behind the scenes? A general impression is created that the vast majority of people leaving Buffalo are young people, and that such change is happening due to the loss of jobs in the city. The Buffalo News in 2000 stated that the ranks of the elderly were growing, noting that 15.9% were older than age 65 in comparison to the national average of 12.4% (Heaney, 2000). However, according to the 2006 census, this figure for Buffalo is 13.6% (U.S. Census Bureau, 2006). Now, are we hiding something? And the next question is, hiding from whom? Are we afraid to let common people know the real picture? Why half-truths? The citizens of Buffalo need to know the complete truth. In my opinion, we are not in a position to trust the media blindly. We need to do our research prior to believing what the media is feeding our brains. The conditions are not hopeless and there is much we can do to improve the condition of our city. We need to understand

Sunday, July 28, 2019

Visual Analysis Survey of Western Art II Essay Example | Topics and Well Written Essays - 1250 words

The piece of art, Madonna and Child, is now part of the collection of the Lowe Art Museum at the University of Miami. Madonna and Child is a painting done on a piece of wood, thus commonly referred to as tempera on wood. It is believed to have been done toward the 16th century and is approximately 80x60 centimeters. Madonna and Child had been neglected for a few centuries, but once rediscovered it became very expensive, suddenly rising to twenty-two million pounds according to the national scientific department. It is now the property of the Lowe at the University of Miami after being given as a gift. Before the 16th century, Italy comprised many states which spoke different languages, so a need to stand out was paramount. The Italians soon led the way by speaking about their culture through works of art such as paintings. This is how Lorenzo di Credi and other painters and sculptors like Da Vinci, Donatello, Verrocchio, Filippo Brunelleschi and others became famous. The painting Madonna and Child spoke of Italy's love of and curiosity about religious issues and how Italians felt about them. It showed that, culturally, Italy is a religious nation. The cultural aspect was also seen in the technique in which most Italian works of art appeared: for example, Madonna and Child was tempera on wood, while other works by Italian artists were made with oil on wood and similar media. This article is going to thoroughly survey the piece of art Madonna and Child, looking at everything from the painting's composition to its characteristics and its comparison to other works of art. The composition of the painting of Madonna and Child by Lorenzo was due to Italy's religious passion. During this era Catholicism was widely spread in Italy and its roots were firmly instilled in the people, who used sculptures and paintings to bring Christianity, and especially Catholicism, to reality. Therefore the theme that led to Madonna and Child

Saturday, July 27, 2019

Influences in the teaching environment Essay Example | Topics and Well Written Essays - 1000 words

This primarily requires the teacher to understand the nature of disruptive behavior and to form strategies to deal with such students. A good teacher is one who is prepared for everything and is always ahead of the students. A good teacher will never lose any of his/her audience. They will always maintain a connection and eye contact with the group to ensure maximum participation (Ryan & Patrick, 2001). The most common factors which lead to disruptive behavior of students in the classroom are the following. The students lose interest in the subject and get bored with the confined environment; when students lose their focus on the topic, they start behaving in a disruptive manner. Misbehavior toward the teacher and violation of school rules by students are the most common forms of disruptive behavior. Some students indulge in wrong habits or feel good by bullying others. Some students deliberately behave badly to get noticed; they do this to become popular among their peers. Students who are self-centered and like it when people circle around them often argue without any reason; they think that they are always right and always have an argument ready. Some students genuinely have behavioral difficulties and cannot do anything to control their behavior; autism and other conditions such as autism spectrum disorder are observed in such students. This disruptive behavior affects the teachers and makes it difficult for them to control the class. A good teacher is one who knows exactly the audience they are addressing. They maintain full contact with all the students, and the lecture delivery is such that the students don't feel bored at any point. For teachers it is said that when the attention of one student is lost, the whole class is lost (Kaplan, Gheen, & Midgley, 2002). The classroom environment is very impulsive and volatile. The mood, reactions and behavior of the students keep changing. In professional education, teachers fail to deliver because of disruption and misconduct in the behavior of the students. Disruptive behavior is like a virus: it spreads throughout the class. If one student misbehaves, the whole class gets an urge to misbehave. Examples of disruptive and negative classroom behavior are: personal attacks by students, either physical or verbal; excessive use of electronic devices in the classroom; leaving class without the teacher's permission; sleeping during classes and not paying heed to the teacher; ignoring the teacher's instructions and arguing with the teacher unnecessarily in an aggressive or hostile manner; and bullying students or teachers, or expressing displeasure through unacceptable behavior like shouting and arguing unnecessarily (Teaching Academy, n.d.). The most efficient strategy which teachers can use to deal with the disruptive behavior of students is to ensure that the interest of the students is maintained constantly. The course material must be made interesting and the relevance of the subject must be conveyed to the students. New methods and interactive techniques must be proposed to make learning and interaction easy, such as discussion, games and group activities. A teacher must encourage participation because it is the only way to get feedback on whether the topic is being delivered to the students or not. This is the technology age, and to attract the attention of the students new methods

Friday, July 26, 2019

Indian economy(international business) Essay Example | Topics and Well Written Essays - 500 words

The value of the Indian Rupee against the U.S. Dollar is presently unfavorable. The economic boom of the last few years that created new markets for the disposable wealth of the affluent middle class has also been accompanied by high interest rates. Although the government's decision to increase interest rates is intended to balance the welfare of the poor with the economic boom, it is certainly expected to pull back the rate of growth. The growth rate of around eight percent seen in the last four years is bound to fall as a result. This prospect has made Indian business leaders a little uncomfortable. While large corporations can overcome this hurdle by borrowing from other countries, small and medium-scale businesses will undoubtedly suffer. Since the start of the liberalization and deregulation phase, the country has largely come to depend on private corporations for infrastructure development. Successive governments over the last 15 years have been reluctant to initiate infrastructure projects, as raising taxes would result in unfavorable public opinion. But a country's economic advancement is inevitably linked to its infrastructure, and one cannot manifest without the other. This flaw has already started to expose some limitations. The meager budgetary allocation for highways and railroads has led to substandard transportation facilities. Government investments in energy, water-treatment and sewage-treatment plants have been disproportionately low. Such a scenario will not lure trans-national companies to set up operations in India as it had done in the past. They may instead look east toward countries like China, Taiwan and the Philippines that are more advanced in this regard. India's English-speaking elite have been the backbone of the recent prosperity. It is to this section of the population that many jobs from the United States and Britain are outsourced. But the standard of education has failed to adapt to the

Thursday, July 25, 2019

How To Survive A Plague Movie Review Example | Topics and Well Written Essays - 750 words

This paper is a review of the film "How to Survive a Plague". It is mainly about a group of activists and the founders of the ACT UP organization. It follows their struggle for recognition from the U.S. government and health organizations in the provision and development of advanced HIV/AIDS remedies. The political aspect of this documentary is the level of ignorance portrayed by President Ronald Reagan toward the epidemic activism. In addition, Senator Jesse Helms pointed out to the victims that "they asked for it". Research through interviews and film demonstrations by members of the ACT UP group identified the spread of the plague as having been hastened by homosexuality and the GLBT lifestyle. The politics against the ACT UP activism was further reflected in the refusal of health organizations to treat AIDS patients and the further refusal of funeral homes to bury those who died of AIDS or related complications. The increasing death toll inspired the mobilization of the GLBT group in 1987 to influence the public perception of HIV/AIDS. This was aimed at the arrival of a probable cure or effective preventive measures that would curb the number of deaths from the plague. The controversy within the GLBT community and the intensive overview of the organization's disagreements on how the campaigns should be carried out make How to Survive a Plague a unique documentary in comparison to other HIV/AIDS films. The differing opinions within the GLBT organization were over whether the fighting meant literal fighting, while a section of the members favored a campaign without undue violence.

Wednesday, July 24, 2019

Module 5 TD-HRM 401 - Recruitment Essay Example | Topics and Well Written Essays - 250 words

Such may include a situation in which an employee has given out some confidential information regarding an investigation into the conduct of an employee by government authorities (Friedman, 2005). Employers need to take several actions aimed at minimizing retaliation, such as coming up with policies against retaliation, proper communication with the employees who are making complaints at a personal level or in a staff meeting, ensuring confidentiality for any complaint that has been raised by the employees, as well as proper documentation of any complaint brought up by the employees. Employers need to further offer training to employees so as to make them clearly understand what actions constitute retaliation and how to respond when such actions occur. The training should be aimed at offering counselling opportunities to the affected employees so as to boost their morale even as they strive to make their genuine concerns to be

Equality and Diversity Essay Example | Topics and Well Written Essays - 2000 words - 1

Overview of Issues and Definitions: Although there are currently many definitions of what diversity ultimately means, for purposes of this brief analysis it will be defined as the extent and level to which the organization/entity in question is able to effectively represent the realities of the environment within which it operates (Kellers 154). Ultimately, such a definition implies that diversity in and of itself should be a means by which the organization seeks to reflect the racial, ethnic, and religious realities of both the market that it seeks to compete within and the population that it draws from. In such a way, such a broad definition allows this level of diversity not only to impact upon the way in which healthcare provision is conducted within a particular region but also to have far-reaching applications with regard to how individuals interact with and represent those populations for which they seek to provide healthcare solutions (Ibrahim 3).

Analysis of the NHS and Available Mechanisms/Legislation to Reduce Ageism/Discrimination and Promote Equality: As the complexity of the nursing world has increased, so too has the level of competition and the demands exhibited on providers throughout the market. This pressure coalesces into forcing these providers to seek to cut costs in almost every identifiable manner (Higgins 15). Not surprisingly, one of the main determinants of why age discrimination takes place within the current environment has to do with the fact that providers are able to save a great deal of money by forcing out more seasoned, experienced, and expensive individuals, to be replaced at younger and cheaper overhead costs (Kmietowicz 994). Alternately, even those individuals who have not yet been employed are oftentimes passed over due to the fact that the employer determines that they will likely command a higher price than their younger counterparts. Even though such discrimination is ultimately illegal, the fact of the matter is that it is oftentimes impossible to prove; thereby encouraging some to engage tacitly in the practice in the knowledge that they will not likely be caught and in the hopes of garnering a further level of profitability in the future (Hossen & Westhues 1090). Another core rationale that individuals within the healthcare profession oftentimes invoke as a means of discriminating against an older demographic is the financial cost that these individuals are likely to incur with respect to increased absences and/or health insurance (Briscoe 9). Naturally, the same concerns oftentimes contribute to discrimination against women, due to the belief that women will be more likely to be absent: attending to their sick children, on maternity leave, or generally being predisposed to being caregivers in a number of different situations. Naturally, the veracity of all of these beliefs is subject to a great deal of debate; however, the point of this analysis is not to point to whether or

Tuesday, July 23, 2019

Mechanical Measurements Essay Example | Topics and Well Written Essays - 1000 words

This technique is also sensitive to extreme temperatures, because temperature affects the transitions of the luminescent molecules (Mantel 2005). The PSP material is also used to develop coatings for self-adhesive tape and decals. This approach is preferable as it is quick and easy during experimentation. It also reduces the time consumed in surface preparation as well as minimizing cost (Mantel 2005). The pressure sensitive paint technique is also used in wind tunnels for pressure measurement. Due to the ability of this technique to provide field measurements, it produces global surface maps with better resolution. Producing high quality global surface maps, however, requires a clear understanding of the internal mechanisms of the technique as well as its functions, properties and experimental setup (Sullivan & Liu 2004). This technique has been found to be easier, more accurate, faster and more cost effective than the use of pressure ports and computational fluid dynamics for external pressure measurement.

Fig 1: The image illustrates the parts of the pressure sensitive paint device.

The pressure sensitive paint is made up of luminescent molecules that are distributed in an oxygen-permeable polymer binder. When the PSP is exposed to ultraviolet rays, the luminescent molecules are excited to a higher energy state. In this state the molecules can decay in several ways, such as transferring the energy to the polymer binder, colliding with oxygen molecules on the PSP surface, or emitting light. The luminescent molecules are sensitive to oxygen: when they collide with oxygen molecules the excitation energy is lost without light being emitted, so the light produced is inversely proportional to the amount of oxygen available at the surface. Most of the collisions between the luminescent molecules and oxygen occur when the PSP is under high pressure, therefore the amount of light emitted is inversely proportional to the pressure on the surface. Hence the pressure on the surface is easily calculated from the amount of light emitted (an illustrative conversion sketch is given at the end of this section). However, the challenge with the luminescent molecules transferring energy to the polymer binder is that this energy transfer raises the temperature of the PSP surface (Mantel 2005). This affects the ability of the luminescent molecules to react with oxygen and could lead to inaccurate results. The effects of the temperature rise are slight because in most cases the temperature rises by only a few degrees. The use of PSP can also be challenging because of the nature of the experiments and the high level of sensitivity required.

Fig 2: Schematic diagram of a pressure-sensitive paint measurement system.

The Moiré technique for stress/strain analysis. The moiré technique is used to determine
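As a supplement to the PSP paragraph above (not part of the original essay), the sketch below shows one common way the intensity-to-pressure conversion is done in the PSP literature, using a Stern-Volmer style calibration of the form I_ref/I = A + B*(p/p_ref). The coefficients A and B, the reference pressure and the example numbers are made-up placeholders; in practice they come from a calibration experiment.

```python
# Minimal sketch of converting emitted-light intensity to surface pressure,
# assuming a Stern-Volmer style calibration: I_ref / I = A + B * (p / p_ref).
# A, B, p_ref and the example values below are illustrative placeholders only.

def pressure_from_intensity(intensity, intensity_ref, p_ref=101_325.0, A=0.2, B=0.8):
    """Return surface pressure in Pa from a PSP intensity measurement.

    intensity     : signal at the test condition (brighter = lower pressure)
    intensity_ref : signal recorded at the known reference pressure p_ref
    """
    ratio = intensity_ref / intensity      # I_ref / I rises as pressure rises
    return p_ref * (ratio - A) / B

# Example: a spot that is 20% dimmer than in the reference image.
p = pressure_from_intensity(intensity=0.8, intensity_ref=1.0)
print(p)  # ~133,000 Pa, i.e. above the reference pressure
```

Applied pixel by pixel to a camera image of the painted surface, the same relation yields the global pressure maps described in the essay.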

Monday, July 22, 2019

Personal Responsibility vs Corporate Responsibility Essay Example for Free

Personal Responsibility vs Corporate Responsibility Essay The question has been asked before and I believe it will continue to be asked for many years to come†¦what is the difference between personal responsibility and corporate responsibility? According to business dictionary personal responsibility is The obligation of an organizations management towards the welfare and interests of the society in which it operates. A company by the name of Symantec, defines corporate responsibility as the way in which we operate with full attention to and respect for ethics, the environment, and a commitment to positive social impact. There are moments where it’s hard to distinguish. It doesn’t seem like there is a clear line distinguishing the differences†¦at least to some. It’s our responsibility to make the best decisions possible for ourselves. Nobody is going to know what’s best for us than ourselves. In the movie Supersize me it goes to prove that the food industry is not going to make the decisions that are in our best interest. McDonalds in my eyes is one of those companies that is seems to be more interested in the profit being made by selling big macs than helping the â€Å"fat kid† become healthier so he can stop getting bullied by other children. I don’t believe it’s the industries responsibility to educate us about what we choose to eat. That’s our individual responsibility. We need to educate ourselves and decide what type of lifestyle we want to live. I believe part of the issue is the fact that our society can’t always take responsibility for their actions. Of course if you eat fast food 3 times a day you’re going to get fat and sick. Most people know that†¦or at least reasonable people. The daily recommended caloric intake is 2,000 calories. Let’s take a closer look†¦let’s say you order a double cheese burger 440 calories, medium fries 380 calories, oh and let’s not forget your medium coke at 210 calories. That’s a total of 1,030 calories. That’s more than half of the recommended intake! Another thing to keep in mind is the fact that the recommended caloric intake of 2,000 caloric intake is not for everyone. Caloric intake is based on a person’s weight and height. I know personally I’m not big enough to be able to consume 2,000 calories and not gain weight. We need to remember that we need to eat to live and shouldn’t live to eat. I believe that if enough people educate themselves†¦ourselves about how to properly eat a balanced meal then there would be a greater demand for the corporate industry to give us what we need. What I personally do think the food industry does need to do is change the way they do business.

Sunday, July 21, 2019

Studying The Future Prospective Of Nanotechnology Computer Science Essay

Studying The Future Prospective Of Nanotechnology Computer Science Essay This paper explores the present impact of nanotechnology on the consumer market. It situates the technical aspects of nanotechnology and describes some early successes of nanomaterials embraced. It includes a description of technology developments in the area of automotive industry, biomedicine, household appliances, nanowires, nanotubes, nanobubble, nanochips, healthcare and numerous other nanostructured materials with a brief description of the number of research and development activities that are in various stages of testing and qualification. II. INTRODUCTION Nanotechnology is derived from the combination of two words Nano and Technology. Nano means very small or miniature. So, Nanotechnology is the technology in miniature form. It is the combination of Bio- technology, Chemistry, Physics and Bio-informatics, et Nanotechnology is a generic term used to describe the applications that work with matter so small that it exists in the molecular and atomic realm. As the name indicates, the fundamental unit in any nanotechnology system is a nanometer, nm, which is one billionth part of a meter. Nanotechnology research shows that at such micro level, the physical, chemical and biological properties of materials are different from what they were at large scale. Nanotechnology originated in India around 16 years back. This new sphere of scientific innovation has a broader scope. Several Indian institutes have introduced degree courses in Nanotechnology at both the UG and PG levels. The areas covered in the Nanotech are Food and Beverage, Bio- Techn ology, Forensic Sciences, Genetics, Space Research, Environment industry, Medicine, Agriculture and Teaching. The fundamental idea is to harness these altered and often improved properties to develop materials, devices and systems that are superior to the existing products. For instance, breaking a material down into nanoparticles allows it to be rebuilt atom by atom, often improving material strength and decreasing weight and dimensions. Based on this concept, researchers have been able to develop a myriad of nanomaterials with amazing properties. The Council of Scientific and Industrial Research, also known as CSIR has set up 38 laboratories in India dedicated to research in Nanotechnology. This technology will be used in diagnostic kits, improved water filters and sensors and drug delivery. The research is being conducted on using it to reduce pollution emitted by the vehicles .Looking at the progressive prospects of Nanotechnology in India, Nanobiosym Inc., a US-based leading nanotechnology firm is planning to set up Indias first integrated nanotechnology and biomedicine technology park in Himachal Pradesh. Nanotechnology has certainly acquired. In the long term scenario, nanotechnology promises to make revolutionary advances in a variety of fields. Possible uses of nanomaterials may include the cleaning of heavily polluted sites, more effective diagnosis and treatment of cancer, cleaner manufacturing methods and much smaller and more powerful computers. III CORE CHAPTERS A. History The first use of the concepts found in nano-technology (but pre-dating use of that name) was in Theres Plenty of Room at the Bottom, a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. 
Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and van der Waals attraction would become increasingly significant. This basic idea appeared plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products.

The term "nano-technology" was defined by Tokyo Science University professor Norio Taniguchi in a 1974 paper as follows: "Nano-technology mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nanoscale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation, and so the term acquired its current sense. Engines of Creation is considered the first book on the topic of nanotechnology.

Nanotechnology and nanoscience got started in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied; this led to a fast-increasing number of metal and metal oxide nanoparticles and quantum dots. The atomic force microscope (AFM or SFM) was invented six years after the STM. In 2000, the United States National Nanotechnology Initiative was founded to coordinate Federal nanotechnology research and development.

Fig.1. Buckminsterfullerene C60, also known as the buckyball, is a representative member of the carbon structures known as fullerenes and is a major subject of research in nanotechnology.

B. Current Research

The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions. Interface and colloid science has given rise to many materials which may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are also related to nanoionics and nanoelectronics. Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor. Progress has been made in using these materials for medical applications; see nanomedicine. Nanoscale materials are sometimes used in solar cells to combat the cost of traditional silicon solar cells, and applications incorporating semiconductor nanoparticles are being developed for the next generation of products, such as display technology, lighting, solar cells and biological imaging; see quantum dots.
1) Top-down Approaches: These seek to create smaller devices by using larger ones to direct their assembly. Many technologies descended from conventional solid-state silicon methods for fabricating microprocessors are now capable of creating features smaller than 100 nm, falling under the definition of nanotechnology. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition techniques.

2) Bottom-up Approaches: These seek to arrange smaller components into more complex assemblies. DNA nanotechnology utilizes the specificity of Watson-Crick base pairing to construct well-defined structures out of DNA and other nucleic acids. Approaches from the field of classical chemical synthesis also aim at designing molecules with well-defined shape (e.g. bis-peptides). More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of giant magnetoresistance and contributions to the field of spintronics. Solid-state techniques can also be used to create devices known as nanoelectromechanical systems (NEMS), which are related to microelectromechanical systems (MEMS). Atomic force microscope tips can be used as a nanoscale write head to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography; this fits into the larger subfield of nanolithography. Focused ion beams can directly remove material, or even deposit material when suitable precursor gases are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis by transmission electron microscopy.

3) Functional Approaches: These seek to develop components of a desired functionality without regard to how they might be assembled. Molecular electronics seeks to develop molecules with useful electronic properties; these could then be used as single-molecule components in a nanoelectronic device (for an example, see rotaxane). Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-called nanocar.

4) Biomimetic Approaches: Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied. Bionanotechnology is the use of biomolecules for applications in nanotechnology, including the use of viruses.

C. Tools and Techniques

A microfabricated cantilever with a sharp tip is deflected by features on a sample surface, much like in a phonograph but on a much smaller scale. A laser beam reflects off the backside of the cantilever into a set of photodetectors, allowing the deflection to be measured and assembled into an image of the surface. There have been several important modern developments. The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy, all flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, which made it possible to see structures at the nanoscale.
The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). The feature-oriented scanning-positioning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope. Various techniques of nanolithography, such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern. The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and for synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. Using, for example, the feature-oriented scanning-positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present this is expensive and time-consuming for mass production, but it is very suitable for laboratory experimentation.

D. Nanotechnology's Future

Over the next two decades, this new field for controlling the properties of matter will rise to prominence through four evolutionary stages. Today nanotechnology is still in a formative phase, not unlike the condition of computer science in the 1960s or biotechnology in the 1980s. Yet it is maturing rapidly. Between 1997 and 2005, investment in nanotech research and development by governments around the world soared from $432 million to about $4.1 billion, and corresponding industry investment exceeded that of governments by 2005. By 2015, products incorporating nanotech are projected to contribute approximately $1 trillion to the global economy; about two million workers will be employed in nanotech industries, and three times that many will have supporting jobs.

Descriptions of nanotech typically characterize it purely in terms of the minute size of the physical features with which it is concerned: assemblies between the size of an atom and about 100 molecular diameters. That depiction makes it sound as though nanotech is merely looking to use infinitely smaller parts than conventional engineering. But at this scale, rearranging the atoms and molecules leads to new properties. One sees a transition between the fixed behavior of individual atoms and molecules and the adjustable behavior of collectives. Thus, nanotechnology might better be viewed as the application of quantum theory and other nano-specific phenomena to fundamentally control the properties and behavior of matter.

Over the next couple of decades, nanotech will evolve through four overlapping stages of industrial prototyping and early commercialization. The first stage, which began after 2000, involves the development of passive nanostructures: materials with steady structures and functions, often used as parts of a product. These can be as modest as the particles of zinc oxide in sunscreens, but they can also be reinforcing fibers in new composites or carbon nanotube wires in ultra-miniaturized electronics.
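As a side note on the investment figures quoted earlier in this section ($432 million in 1997 rising to about $4.1 billion in 2005), the growth they imply can be expressed as a compound annual growth rate. The short Python sketch below computes it; the dollar amounts come from the text, while the assumption of smooth year-on-year compounding is purely illustrative.

```python
# Implied compound annual growth rate (CAGR) of worldwide government
# nanotech R&D spending cited in the text: $432M (1997) -> ~$4.1B (2005).
# The smooth-compounding model is an illustrative assumption.
start_usd = 432e6
end_usd = 4.1e9
years = 2005 - 1997

cagr = (end_usd / start_usd) ** (1 / years) - 1
print(f"Implied CAGR over {years} years: {cagr:.1%}")  # about 32.5% per year
```

In other words, the cited figures correspond to public funding growing by roughly a third every year over that eight-year window.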
The second stage, which began in 2005, focuses on active nanostructures that change their size, shape, conductivity or other properties during use. New drug-delivery particles could release therapeutic molecules in the body only after they reach their targeted diseased tissues. Electronic components such as transistors and amplifiers with adaptive functions could be reduced to single, complex molecules. Starting around 2010, workers will cultivate expertise with systems of nanostructures, directing large numbers of intricate components to specified ends. One application could involve the guided self-assembly of nanoelectronic components into three-dimensional circuits and whole devices. Medicine could employ such systems to improve the tissue compatibility of implants, to create scaffolds for tissue regeneration, or perhaps even to build artificial organs.

After 2015-2020, the field will expand to include molecular nanosystems: heterogeneous networks in which molecules and supramolecular structures serve as distinct devices. The proteins inside cells work together this way, but whereas biological systems are water-based and markedly temperature-sensitive, these molecular nanosystems will be able to operate in a far wider range of environments and should be much faster. Computers and robots could be reduced to extraordinarily small sizes. Medical applications might be as ambitious as new types of genetic therapies and antiaging treatments. New interfaces linking people directly to electronics could change telecommunications.

Over time, therefore, nanotechnology should benefit every industrial sector and health care field. It should also help the environment through more efficient use of resources and better methods of pollution control. Nanotech does, however, pose new challenges to risk governance. Internationally, more needs to be done to collect the scientific information required to resolve the ambiguities and to install the proper regulatory oversight. Helping the public to perceive nanotech soberly, in a big picture that retains human values and quality of life, will also be essential for this powerful new discipline to live up to its astonishing potential. Drastic advancements related to nanotechnology have been made in the fields of electronics, medicine, science, fabrication and computation. The details are given below.

1) Future of Nanoelectronics: The recent progress of nanoelectronic devices has revealed many novel devices under consideration. Even though some devices have achieved experimental results comparable with some of the best silicon FETs, these devices have yet to show electrical characteristics beyond the basic, functional level. For several years from now, the planar MOSFET, combined with high-k dielectrics and coupled with strained-layer technology, is expected to maintain its domination of the market, because manufacturers still attempt to exploit their existing manufacturing capabilities and seem reluctant to adopt new technology. However, double- and multi-gate MOSFET scaling is superior to recent planar MOSFET and UTB FD MOSFET scaling, so the double- and multi-gate device is projected as the ultimate MOSFET. The role of double-gate and non-planar MOSFETs will take a greater share as this technology matures and the risks become better understood in the near future. On the other hand, several fabrication issues on the route to standard fabrication have to be solved for every other technology.
The figure below indicates the ITRS projection for the first year of full-scale production for future nanoelectronic devices, which reflects the degree of complexity in fabrication for each technology. New MOSFET structures, starting with UTB-SOI MOSFETs and followed by multi-gate MOSFETs, will be implemented soon. Next-generation devices such as carbon nanotubes, graphene and spin transistors are promising, given the performance shown in many research efforts. However, processing issues force them to take a longer route to becoming the main devices for nanoelectronics.

Fig.2. Projection for the first year of full scale production for future nanoelectronic devices.

Nanochips: Currently available microprocessors use resolutions as small as 32 nm and house up to a billion transistors in a single chip. MEMS-based nanochips have the future capability of a 2 nm cell, leading to 1 TB of memory per chip.

Fig.3. A MEMS based nanochip.

Nanophotonic systems, including nanoelectromechanical system (NEMS) sensors, work with light signals rather than the electrical signals of electronic systems. They enable parallel processing, which means higher computing capability in a smaller chip, and enable the realization of optical systems on a semiconductor chip.

Fig.4. A silicon processor featuring an on-chip nanophotonic network.

Fuel cells use hydrogen and air as fuels and produce water as a by-product. The technology uses a nanomaterial membrane to produce electricity.

Fig.5. Schematic of a fuel cell.

Fig.6. 500W fuel cell.

Nanoscale materials have feature sizes of less than 100 nm and are utilized in nanoscale structures, devices and systems.

Nanoparticles and Structures

Fig.7. Gold nanoparticles.

Fig.8. Silver nanoparticles.

Fig.9. A stadium-shaped quantum corral made by positioning iron atoms on a copper surface.

Fig.10. A 3-dimensional nanostructure grown by controlled nucleation of silicon-carbide nanowires on gallium catalyst particles.

Fig.11. Nanowire solar cell: the nanowires create a surface that is able to absorb more sunlight than a flat surface.

2) Nanotubes: Since their discovery, carbon nanotubes have been used as building blocks in various nanotechnology applications. Although many applications are at preliminary stages of experimentation, carbon nanotubes have future prospects in almost all spheres of electronics. Highly integrated circuits are one of the areas where many researchers are focusing their work and where the electronic properties of carbon nanotubes are being exploited. Researchers have identified and fabricated electronic devices having densities ten thousand times greater than present-day microelectronics. These technologies will either complement or replace CMOS. Further, electronic devices based on carbon nanotubes have additional and advanced features such as conductivity, current-carrying capacity and resistance to electromigration. Semiconducting carbon nanotubes with excellent mobilities and semiconducting behavior have been prepared, and these are far better than conventional semiconductors.
Actually, there are some major barriers to developing highly integrated circuits: present fabrication methods produce a mixture of metallic and semiconducting nanotubes, and the exact electronic arrangement within a semiconducting nanotube is poorly understood. These are the hurdles in manufacturing and fabricating highly integrated circuits; however, continuous research in this area will lead to new and much more advanced technology that will not only overcome these barriers but will also open the door to new electronic applications.

Fig.12. Nanotube.

3) Future of Nanomedicine: Nanomedicine is the application of nanotechnology in medicine, including curing diseases and repairing damaged tissues such as bone, muscle, and nerve. The aims are to develop cures for traditionally incurable diseases (e.g. cancer) through the utilization of nanotechnology and to provide more effective treatment with fewer side effects by means of targeted drug delivery systems. Nanotechnology is beginning to change the scale and methods of vascular imaging and drug delivery. Nanomedicine initiatives envisage that nanoscale technologies will begin yielding more medical benefits within the next 10 years. This includes the development of nanoscale laboratory-based diagnostic and drug discovery platform devices such as nanoscale cantilevers for chemical force microscopes, microchip devices, nanopore sequencing, etc. The National Cancer Institute has related programs too, with the goal of producing nanometer-scale multifunctional entities that can diagnose, deliver therapeutic agents, and monitor cancer treatment progress. These include the design and engineering of targeted contrast agents that improve the resolution of cancer cells to the single-cell level, and nanodevices capable of addressing the biological and evolutionary diversity of the multiple cancer cells that make up a tumor within an individual.

Thus, for the full in vivo potential of nanotechnology in targeted imaging and drug delivery to be realized, nanocarriers have to get smarter. Pertinent to realizing this promise is a clear understanding of both the physicochemical and the physiological processes that form the basis of the complex interactions inherent to the fingerprint of a nanovehicle and its microenvironment: extracellular and intracellular drug release rates in different pathologies; interaction with the biological milieu, such as opsonization; other barriers en route to the target site, be they anatomical, physiological, immunological or biochemical; and exploitation of opportunities offered by disease states (e.g., tissue-specific receptor expression and escape routes from the vasculature).

There are numerous examples of disease-fighting strategies in the literature using nanoparticles. Often, particularly in the case of cancer therapies, drug delivery properties are combined with imaging technologies, so that cancer cells can be visually located while undergoing treatment. The predominant strategy is to target specific cells by linking antigens or other biosensors (e.g. RNA strands) to the surface of the nanoparticles so that they detect specialized properties of the cell walls. Once the target cell has been identified, the nanoparticles adhere to the cell surface, or enter the cell via a specially designed mechanism, and deliver their payload.
Once the drug is delivered, if the nanoparticle is also an imaging agent, doctors can follow its progress, and the distribution of the cancer cells becomes known. Such specific targeting and detection will aid in treating late-phase metastasized cancers and hard-to-reach tumors and give indications of the spread of those and other diseases. It also prolongs the useful life of certain drugs, which have been found to last longer inside a nanoparticle than when injected directly into a tumor, since drugs injected into a tumor often diffuse away before effectively killing the tumor cells.

4) Future of Nanoscience: Without carbon, life cannot exist, the saying goes, and not only life. For technological development, carbon was the ultimate material of the 19th century. It allowed the beginnings of the industrial revolution, enabling the rise of the steel and chemical industries; it made the railways run; and it played a major role in the development of naval transportation. Silicon, another very interesting material, which makes up a quarter of the earth's crust, became the material of the 20th century in its turn. It gave us the development of high-performance electronics and photovoltaics with large fields of application and played a pivotal role in the evolution of computer technology. The increased device performance of information and data processing systems is changing our lives on a daily basis, producing scientific innovations for a new industrial era. However, success breeds its own problems, and there is ever more data to be handled, which requires a nanoscience approach. This cluster aims to address various aspects, prospects and challenges in this area of great interest for all our futures.

Carbon exists in various allotropic forms that are intensively investigated for their unusual and fascinating properties, from both fundamental and applied points of view. Among them, the sp2 (fullerenes, nanotubes and graphene) and sp3 (diamond) bonding configurations are of special interest since they have outstanding and, in some cases, unsurpassed properties compared to other materials. These properties include very high mechanical resistance, very high hardness, high resistance to radiation damage, high thermal conductivity, biocompatibility and superconductivity. Graphene, for example, possesses a very uncommon electronic structure and a high carrier mobility, with charge carriers of zero effective mass moving at constant velocity, just like photons. All these characteristics have put carbon and carbon-related nanomaterials in the spotlight of science and technology research. The main challenges for future understanding include i) material growth, ii) fundamental properties, and iii) developing advanced applications. Carbon nanoparticles and nanotubes, graphene, nanodiamond and films address the most current aspects and issues related to their fundamental and outstanding properties, and they underpin various classes of high-tech applications based on these promising materials. Future prospects, difficulties and challenges are addressed. Important issues include growth, morphology, atomic and electronic structure, transport properties, superconductivity, doping, nanochemistry using hydrogen, chemical and bio-sensors, and bio-imaging, allowing readers to evaluate this very interesting topic and draw perspectives for the future.

E. Foreign Prospects of Nanotechnology

Nanotechnology provides a significant opportunity to address global challenges.
This is leading to intense global competition to commercialise different products enabled by nanotechnology. However, UK industry is well placed to capitalise on this opportunity and to participate in the development of many new products and services, operating alone or in collaboration with international partners. Success in this area will lead to growth in employment and wealth creation. Today, nanotechnology is evolving, with some mature products and many in the growth and developmental stage; this is not unlike the condition of computer science in the 1960s or biotechnology in the 1980s. Nanotechnology has been applied to the development of products and processes across many industries, particularly over the past ten years. Products are now available in markets ranging from consumer products through medical products to plastics, coatings and electronics.

There have been various market reports estimating the scale of potential future value for products that are nanotechnology enabled. A report from Lux Research published in 2006, The Nanotech Report, 4th Edition, notes that nanotechnology was incorporated into more than $30 billion in manufactured goods in 2005, and projects that in 2014, $2.6 trillion in manufactured goods will incorporate nanotechnology. Even if this is an over-estimate, it is clear that there is a vast market available for nanotechnology-based products. It is extremely important to the UK economy that UK companies engaged in nanotechnology participate at each stage of the supply chain. While companies are moving speedily to develop further and more advanced products based on nanotechnology, they are becoming increasingly aware that there are many challenges to address.

It was with this background that a Mini Innovation and Growth Team (Mini-IGT) was formed, comprising members of the NanoKTN and the Materials KTN as the secretariat, together with members of the Chemistry Innovation KTN and the Sensors and Instrumentation KTN, to prepare a report on nanotechnology on behalf of UK industry. A questionnaire was sent to the members of the various KTNs to solicit feedback on their views on nanotechnology, focussing on their commercial position and also their concerns and issues. While the UK Government has commissioned reports and provided responses over the past decade in the field of nanotechnology, the UK has not articulated an overarching national strategy on nanotechnology that can rank alongside those from the likes of the US and Germany. It is intended that this report, with its unique industry-led views on nanotechnology, together with other strategic documents, including the Nanoscale Technologies Strategy 2009-2012 produced by the Technology Strategy Board, will provide a significant contribution to a future UK Government Strategy on Nanotechnology.

Nanotechnology is the basis for many products that are in common use and is providing the capability to produce a very wide range of new products that will become commonplace in the near future. The UK, like many other countries, has invested heavily in nanotechnology and has considered, through a series of reports and Government responses, how to manage and fund nanotechnology developments. At the third meeting of the Ministerial Group on Nanotechnology it was agreed that a nanotechnology strategy should be developed for the UK. As part of the strategy development process, Lord Drayson launched an evidence-gathering website on 7th July 2009.
Alongside this, four Knowledge Transfer Networks (Nanotechnology, Materials, Chemistry Innovation, and Sensors and Instrumentation) with significant industrial interest in nanotechnology agreed that it was necessary for industry to contribute to policy development using a bottom-up approach. It is intended that this report, with its unique industry-led views on nanotechnology, will provide a significant contribution to a future overarching UK Government Strategy on Nanotechnology, alongside other input from, inter alia, the Technology Strategy Board and the Research Councils. In addition to the questionnaire, feedback was sought from industry at workshop discussions with invited industry leaders and others in the field of nanotechnology.

National Innovation System Concept

National Innovation System Concept

In a globalising world, is there any value in the concept of a "National Innovation System"?

INTRODUCTION

The progressive advancements in the different scientific fields and their applications in technology have become one of the most important cornerstones of any nation's wealth and economic growth. For technology and scientific research to be successful in all aspects, including the organisation of and collaboration between the different players in each technological camp, governments and public and private organisations reached the conclusion that a whole structure of communication and cooperation should be established in order to achieve the desired successes in research, development and the technological objectives that are ultimately the driving force for any economy and for societal well-being within a state.

One of the most important problems facing the policy-making process was the lack of information regarding specific fields and the lack of knowledge in others. The need for a long and constructive relationship between scientists and technology specialists, on one side, and policy makers, on the other, became more evident in the twentieth century as technological advancements (in all industrial fields and in sectors related to information technology) grew at extremely high speed and in extremely large amounts. A stable and continuous flow of information concerning the ongoing changes that were (and still are) taking place in the research and development arena had to be maintained. This gave birth to the concept of National Innovation Systems which, in theory, should be the solution to the above-mentioned problem. The idea behind the evolving concept is thoroughly explained by Mytelka, who states: "The 1970s and 1980s marked the passage from an era in which technological change was mainly incremental. Time was available to either amortize heavy tangible and intangible investments in new products and processes, or to catch up with a slowly moving technological frontier by mastering processes of production and distribution for what were relatively stable products. Protected national environments were both a blessing and a curse in that earlier period, since they provided time and space for infant industries to emerge but frequently little incentive for them to become competitive whether at home or abroad. At the same time, within the markets of developing countries, high levels of protection created the potential for oligopolistic market behavior by large, mainly foreign firms, which raised prices to local consumers and made exporting difficult." (15)

National Innovation Systems

The concept of 'National Innovation System' appeared as a prospective response to the necessity of having clear policies that shape the work of, and the interconnectedness between, research, organisations, industries and governments with regard to science and technology research and the products expected to result from that research. An innovation system is the result of the processes of research and development in any science and technology related field. In this context, we can understand that the innovation system involves the distribution, or spreading, of the needed information and knowledge bases regarding a given technology between the various entities that require them.
This should cover governmental organisations, interested research centres, universities, industries and even individuals. The need to create innovation systems on national levels became important in the 1970s and 1980s. This is explained by Nelson and Rosenberg as follows: "The slowdown of growth since the early 1970s in all of the advanced industrial nations, the rise of Japan as a major economic and technological power, the relative decline of the United States, and widespread concerns in Europe about being behind both have led to a rash of writing and policy concerned with supporting the technical innovative prowess of national firms. At the same time, the enhanced technical sophistication of Korea, Taiwan, and other NICs (Newly Industrialized Countries) has broadened the range of nations whose firms are competitive players in fields that used to be the preserve of a few and has led other nations who today have a weak manufacturing sector to wonder how they might emulate the performance of the successful NICs. There clearly is a new spirit of what might be called technonationalism in the air, combining a strong belief that the technological capabilities of a nation's firms are a key source of their competitive prowess, with a belief that these capabilities are in a sense national, and can be built by national action." (Nelson 3)

It is evident that the concept was originally created in order to give an advantage to science and technology related entities with respect to competitiveness and the ability to survive and grow, both inside the borders of the country itself and as a strong product-export bridge to other countries. The main objective in this regard is economic: each country is required to establish the most suitable environment for scientific research and technological structures to flourish and, by doing so, to strengthen the economy of the country and the living standards of its people. The National Research Council defines 'National Innovation System' by stating that it "refers to the collection of institutions and policies that affect the creation, development, commercialization, and adoption of new technologies within an economy" (105). Another definition is that "the National Innovation System is a systemic model that shows dynamic interactions and pattern of processes that facilitate technology flow in the system, incorporating variables and players from all directions that affect the innovation process" (Hulsink 16). It must be noted here that the above-mentioned process should contain within it all the elements that influence the whole technological sector within a country, and this is specifically why there should be clear policies and laws regulating the way the system functions and how it presents the required results.

Factors leading to the creation of a successful national innovation system are presented by Biegelbauer and Borras: "A national innovation system is a whole set of factors influencing the development and utilisation of new knowledge and know-how." The authors emphasise that education is an important element in the process of creating and implementing the system in question (84). For a national innovation system to be structured correctly, a thorough and comprehensive analysis should be performed on a national scale; this is because the system should be able to determine which elements are needed for growth and which policies are the most adequate.
â€Å"National profiles are too complex and diverse to derive a unified representation of the system, posing the problem of defining and modelling the NIS. One useful way to deal with heterogeneous profiles of NISs is a taxonomic approach where national systems are classified into several categories† such as â€Å"large high-income countries, smaller high-income countries, and lower-income countries† or â€Å"large/rich countries, small/rich countries, and developing countries† (Park, Y. and Park, G. 403-404). According to the Organisation for Economic Co-operation and Development, there are different policy making problems in what concerns the operational side of the national innovation system. â€Å"In General, the attention of policy makers moved away from an overall priority to fund the RD input to the economy, with additions along the way to the market to enhance technology transfer† and a special care was given in what concerns encouraging the collaboration and the methods of networked work and â€Å"the flows of knowledge into spin-offs and industrial use, institutional change, entrepreneurship, and improved market oriented financial systems† (14-15). The document of the Organisation for Economic Co-operation and Development also explains that policy makers should take important factors into consideration, such as the relations and inter-dependences between a variety of market sectors, such as labour, capital, and product markets because they are the source of innovation and growth. Another important factor is that policies should also cover sectors that are not considered as related to markets, this can include partnerships in research and development activities (16). The policies in what concerns the system in question, for it to be successful on a national level, should take into consideration a variety of elements and keep them under continuous scrutiny. These elements include the amount and the quality of the performed innovation, the continuous growth in manpower (for what concerns the technological production process) and in the population (in what concerns the use of the produce), the level of growth of the economy itself with all what comes with that concerning new challenges in regards to raw materials and the human factor, the ability of firms to move from one sector into the other, according to the changes in scientific and technological advancements, independently. This creates a huge amount of work for policy makers and scientists and technology experts alike in order to keep policies efficient and effective, on one hand, and in continuous evolution and change, on the other, according to the changes on the ground and according to the changes forced by outside factors. National Innovation Systems Globalisation As clear from the concept’s name itself, the most important point to note is that it was created, and originally thought of, around the concept of a limited political and geographic entity; the country. It focuses on the ‘national’ aspect of the economical, scientific and technological sectors. In today’s world, that is certainly different from that of the 1950s and the 1960s, many changes have occurred that transformed our lives because of the tremendous advancements in science and its direct applications in technology; this includes the way we make business, the way we create products and offer new services, the way the manufacturing processes of certain products take place, and the way information and knowledge are being distributed and reached. 
It is now more obvious than at any point in the past that a national system of science, technology, research and industry, no matter how accurately its policies are prepared and implemented, cannot survive if the international (or global) element is not taken into consideration and dealt with adequately. "Much less agreement exists ... on how precisely globalization and innovation interact, and what this implies for industrial dynamics and a policy-oriented theory of innovation systems. An important weakness of innovation system theory is a neglect of the international dimension. There is a tendency to define a NIS as a relatively closed system, even when dealing explicitly with the impact of globalization. A central proposition rests on dynamic agglomeration economies: interactive learning requires co-location, hence a preference for national linkages." (Ernst 1)

Ernst illustrates his point of view with the most developed (and most rapidly developing) sector of industry in the world today, which is information technology. He asserts that electronics equipment and components, software and information services, audio and video, and communication technologies (including e-commerce and web services) are all beyond the rigid understanding of the traditional national innovation system as it was originally conceived by individuals, institutions, and governments. The changes of the last 25 years have brought new problems for the concept of the national innovation system. According to Mytelka, this is due to two main factors: "First, over the past two decades, production has become more knowledge intensive across a broad spectrum of industries from the shrimp and salmon fisheries in the Philippines and Chile, the forestry and flower enterprises in Kenya and Colombia, to the furniture, textile and clothing firms of Denmark, Taiwan and Thailand. Second, competition has both globalized and become more innovation-based" (15-16).

It is, on the other hand, important to note that firms benefit from "sharing knowledge and reduce costs by jointly sourcing services and suppliers." This ongoing process of knowledge exchange will always have a positive influence on the procedures and results of the flow of information and knowledge and will create more opportunities for co-operation in research and development experiences and projects. "Local training institutions and a sound infrastructure can provide further benefits for companies. Moreover, rivalry between firms can stimulate competitiveness. To note also that life quality and other non economic factors can be just as important in determining growth" (Carrin et al. 24). It is necessary for innovation systems to evolve according to the evolution of the various elements that shape research and technology today. For the concept of the innovation system to survive with success, new factors should be introduced within its structure to enable it to keep its competitiveness and growth, keeping in mind that this should be done in a way that turns the changes brought by globalization into advantages, not disadvantages. Ernst draws our attention to the bright side, stating that "globalization enhances the dispersion of knowledge across firm boundaries and national borders. Such dispersion however has remained concentrated, due to the continuous impact of agglomeration economies" (30).
CONCLUSION

The idea behind the concept of the national innovation system, like any other theory or structure, should evolve ... and this is exactly what happened. Scientific research, technological endeavours, and industrial successes do not depend on the organisation of institutions and on the flow of information within the national boundary alone; they interact with realities created and introduced by a newly shaping world with no borders and no geopolitical boundaries. The policies that deal with the flow and exchange of information and knowledge should deal with international effects and beyond-the-borders factors that can, and will, ultimately influence national realities. Since the concept's first presentation by Freeman (1987) and Dosi et al. (1988), many changes have taken place in the analysis and the policies regarding its methods and implementation; this is due to the enormous changes that have happened in the various scientific and technological fields. The concept of the national innovation system is a precious tool that should not be dropped because of globalisation; instead, it should be reshaped to cover the elements that did not exist previously. It should encourage collaboration and the continuous flow and distribution of information and knowledge within the country itself, and then within the regional and international space. The NIS should be re-developed to cover the national, regional, and multi-national corporate levels.

Works Cited

Mytelka, Lynn K. "Local Systems of Innovation in a Globalized World Economy." Industry and Innovation 7.1 (2000): 15-32.

Park, Yongtae and Gwangman Park. "When does a national innovation system start to exhibit systemic behavior?" Industry and Innovation 10.4 (2003): 403-414.

Nelson, Richard and Nathan Rosenberg. "Technical Innovation and National Systems." National Innovation Systems: A Comparative Analysis. Ed. Richard Nelson. New York: Oxford University Press, 1993. 3.

National Research Council. Harnessing Science and Technology for America's Economic Future. Washington, D.C.: National Academy Press, 1999.

Biegelbauer, Peter and Susana Borras. Innovation Policies in Europe and the US: The New Agenda. Hampshire, England: Ashgate Publishing Limited, 2003.

Hulsink, Willem. Regional Clusters in ICT. Amsterdam, The Netherlands: Boom Publishers, 2002.

The Organisation for Economic Co-operation and Development. Dynamising National Innovation Systems. France: OECD Publications, 2002.

Ernst, Dieter. "How globalization reshapes the geography of innovation systems." 24 May 1999. 06 September 2006. http://geein.fclar.unesp.br/reunioes/quinta/arquivos/140306_Ernst_99_globalization_1_.pdf

Carrin, Bart, et al. Science-Technology-Industry Network. September 2004. 07 September 2006.