
English Translation: mobilerobot

2023-07-02

Part One: English Translation (mobilerobot)

English Translation

His results are not clear-cut, often revealing a weak or even negative response of retained-earnings-financed FDI to taxes. This research clearly cast doubt on the Hartman model, yet there has been no serious attempt to re-estimate it with better data or methods.

The literature does reflect the insight of Slemrod (1990), who argued that the policies a home country uses to handle double taxation may shape the tax response of FDI. The usual distinction is between countries that do not tax income earned outside the home country, exempting foreign-earned income from tax liability, and countries that tax the parent firm's income on a worldwide basis; in practice, countries may treat foreign income in a variety of ways to spare their multinationals double taxation. The two standard approaches are for the home country to grant a credit for, or a deduction of, the foreign taxes paid by the multinational.

These methods of handling double taxation took on a crucial role in the literature analyzing taxes and FDI once researchers began to study the striking effects of the 1986 US tax reform on inbound FDI. Scholes and Wolfson (1990) conjectured that when US tax rates rose, FDI in the United States by multinationals taxed on a worldwide basis might actually increase. This seemingly counterintuitive idea comes from recognizing that, under a credit system, such multinationals would see no increase in their tax liability, whereas US domestic investors (and multinationals from territorial-tax countries) would bear the full brunt of the higher US tax liability. As firms bid for the same US assets, worldwide-tax multinationals would enjoy an advantage and invest more. Scholes and Wolfson (1990) offered only simple statistical evidence that US inbound FDI rose after 1986, without controlling for other factors, but Swenson (1994) examined their hypothesis more carefully by testing whether the 1986 reform had different effects on FDI across industries whose tax rates changed differently after the reform. Specifically, Swenson examined industry-level panel data from 1979 to 1991, exploiting the variation in industry tax rates after the 1986 reform, and found that FDI did increase more where average tax rates rose more, particularly for investors from worldwide-tax countries. A troubling aspect of Swenson's study is the finding that the Scholes-Wolfson hypothesis is supported when average tax rates are used but rejected when effective tax rates are used. Auerbach and Hassett (1993) provided further evidence against the Scholes-Wolfson hypothesis by developing a model of FDI that predicts which types of US investment the tax reform should have favored for territorial-tax multinationals relative to worldwide-tax ones. In particular, their model shows that the reform should have made merger-and-acquisition (M&A) FDI more attractive to territorial-tax multinationals, while discouraging worldwide-tax multinationals from FDI in new plant and equipment. The data, however, seem to show that the large increase in inbound FDI after the 1986 US tax reform was driven by multinationals from worldwide-tax countries (mainly Japan and the United Kingdom) expanding their FDI.

Thus, in many ways, the effect of the 1986 tax reform on FDI remains an open question to this day. Although that particular question is now somewhat dated, the hypothesis that FDI from worldwide-tax countries that grant parent firms a credit should be relatively less sensitive to tax rates remains a topic of continuing interest. Hines (1996) gave it arguably its best test, building creatively on the existing literature that examines whether state-level taxes affect the distribution of FDI within the United States, and thereby contrasting territorial with worldwide tax treatment. Previous studies of the effect of state taxes on the distribution of FDI across states had found little evidence of any effect (see, e.g., Coughlin, Terza and Arromdee, 1991). As with federal taxes, multinationals may respond differently to state-level taxes depending on whether they face territorial or worldwide taxation at home. Hines's (1996) empirical strategy was to examine the distribution of FDI across US states and compare the tax sensitivity of FDI by foreign investors from "non-credit-system" countries with that of investors from "credit-system" countries. He found that a 1% higher state tax rate is associated with FDI by non-credit-system investors that is about 9% lower than FDI by credit-system investors.
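Hines's headline comparison can be illustrated with a toy calculation. The sketch below is not from the paper: the baseline FDI index and the credit-system semi-elasticity are hypothetical, and only the roughly nine-point gap between the two investor groups' responses is taken from the result quoted above.

```python
# Illustrative only: a stylized version of the Hines (1996) comparison.
# The numbers below (baseline index, credit-system elasticity) are
# hypothetical; only the ~9-point gap between groups echoes the text.

def fdi_under_tax(baseline_fdi, tax_rate_pct, semi_elasticity):
    """Scale a baseline FDI index by a constant proportional response
    per percentage point of state tax."""
    return baseline_fdi * (1 + semi_elasticity) ** tax_rate_pct

baseline = 100.0              # hypothetical FDI index at a 0% state tax
credit_elasticity = -0.01     # hypothetical response of credit-system investors
noncredit_elasticity = -0.10  # ~9 points more negative, per the headline finding

for tax in (1, 5):
    credit = fdi_under_tax(baseline, tax, credit_elasticity)
    noncredit = fdi_under_tax(baseline, tax, noncredit_elasticity)
    print(f"tax={tax}%: credit={credit:.1f}, non-credit={noncredit:.1f}")
```

The point of the sketch is only that the same tax increase implies a much larger proportional FDI reduction for investors whose home countries do not credit foreign taxes.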

In summary, the literature makes clear that the details matter enormously when considering the effect of taxes on FDI. Multinationals face many different levels of tax rates, and many different policies for handling double taxation, in both home and host countries, and these can greatly alter the effect of taxes on multinationals' investment incentives. As noted above, empirical approaches and data samples vary widely, so the extent to which taxes (and reforms such as the 1986 US tax reform) affect FDI remains a significant question. The evidence seems strongest that, for multinationals whose home countries operate a credit system for foreign taxes, the taxes incurred in the host country are relatively inconsequential.

Other weaknesses in the literature remain to be addressed. First, all of the studies above examine (at best) industry-level data with models of what is fundamentally firm-level activity. This can create problems for using theory to interpret the empirical evidence. The clearest example is the use of average tax rates as the tax variable, which is an outright error of variable specification. Whether average or effective tax rates are the appropriate measure of tax liability is rarely discussed, yet, as Swenson's (1994) study illustrates, the two can yield very different estimated effects on FDI.

The literature has also only recently begun to examine taxes other than corporate income taxes. For example, a recent working paper by Desai, Foley and Hines (2004) presents evidence that indirect business taxes affect FDI to roughly the same extent as corporate income taxes. Similarly, the effect of bilateral international tax treaties on FDI remained an unexplored empirical topic until recently. Thousands of such treaties have been negotiated to reduce, among other things, the withholding taxes that countries levy. Hallward-Driemeier (2003) and Blonigen and Davies (2004) find no evidence that these treaties affect FDI activity in any significant way.

3.3 Institutions

The quality of institutions is likely an important determinant of FDI activity, particularly for less developed countries, for a variety of reasons. First, poor legal protection of assets raises the chance that a firm's assets will be expropriated, making investment less attractive. Second, poor institutional quality undermines well-functioning markets (and/or fosters corruption), which raises the cost of doing business and should reduce FDI activity. Finally, to the extent that poor institutions lead to poor infrastructure (i.e., public goods), expected profitability falls for the FDI that does enter the market.

Although these basic hypotheses are uncontroversial, estimating the magnitude of institutional effects on FDI is difficult because there is no accurate measure of institutions. Most measures are composite indices of a country's political, legal, and economic institutions, constructed from surveys of officials or businesspeople familiar with the country. Since the respondents differ across countries, cross-country comparability is questionable. In addition, institutions tend to persist, so there is rarely meaningful variation within a country over time.

For these reasons, although cross-country FDI studies often include measures of institutions and/or corruption, these are rarely the focus of the analysis. The papers by Wei (2000a; 2000b) are an exception, showing a strong negative correlation between various corruption indices and FDI, though other studies have found no such evidence (e.g., Wheeler and Mody, 1992). Hines (1995) provides an interesting "natural experiment" approach by examining the 1977 US Foreign Corrupt Practices Act, which imposed penalties on US multinationals that bribed foreign officials. He estimates a negative effect of the Act on FDI in the period after its enactment. Such natural-experiment analyses hold promise for more convincing evidence in the future, although finding such natural experiments is quite difficult.

3.4 Trade Protection

Most trade economists regard the hypothesized link between FDI and trade protection as fairly clear-cut: higher trade protection should make firms more likely to substitute affiliate production for exports in order to avoid the costs of trade protection. This is commonly termed tariff-jumping FDI. Perhaps because the theory is so simple and intuitive, few studies have tested the hypothesis directly. Another likely reason is data-driven: non-tariff forms of protection that are consistent across industries are difficult to quantify. A number of firm-level studies have used industry-level measures to control for various trade-protection programs, with decidedly mixed results, including Grubert and Mutti (1991), Kogut and Chang (1996), and Blonigen (1997). An alternative to industry-level measures comes from antidumping cases, which impose substantial firm-specific antidumping duties. Using these more precise measures of the protection firms actually face, Belderbos (1997) and Blonigen (2002) both find stronger evidence of tariff-jumping FDI, although Blonigen's analysis strongly suggests that such responses are seen only for multinationals based in developed countries. This may be another reason why studies mixing tariff-jumping protection with other measures find weak results: FDI involves substantial costs that many smaller exporting firms may be unable to finance or find profitable. Indeed, trade protection may be deliberately targeted at import sources with little FDI presence. This suggests that FDI and trade protection may well be endogenous, an issue that has hardly been examined empirically. One exception is Blonigen and Figlio (1998), who find evidence that increased FDI into a US senator's state or a US representative's district increases the likelihood that they vote for further trade protection.

3.5 Trade Effects

The partial-equilibrium studies discussed to this point have largely ignored the trade effects of FDI, even though these are closely connected to the underlying motivations for FDI. Perhaps the most common motivation ascribed to FDI is as a substitute for exporting to the host-country market. As modeled by Buckley and Casson (1981), exporting can be thought of as having relatively low fixed costs but higher variable costs from transport and trade barriers. FDI that serves the same market through an affiliate allows a substantial reduction in these variable costs, but likely involves fixed costs greater than those of exporting. This suggests a natural progression: once host-market demand for the multinational's product reaches a sufficient scale, the firm switches from exporting to FDI.
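The fixed-versus-variable cost trade-off described above implies a simple crossover rule. A minimal sketch, with entirely hypothetical cost numbers, shows how the market-size threshold at which FDI overtakes exporting can be computed:

```python
# Buckley-Casson style trade-off (illustrative numbers only): exporting
# has low fixed cost F_x but high per-unit cost c_x (transport, tariffs);
# FDI has high fixed cost F_f but low per-unit cost c_f.

def total_cost(fixed, unit, quantity):
    return fixed + unit * quantity

F_x, c_x = 10.0, 5.0   # exporting: low fixed, high variable
F_f, c_f = 100.0, 2.0  # FDI: high fixed, low variable

# Crossover market size solves F_x + c_x*q == F_f + c_f*q.
q_star = (F_f - F_x) / (c_x - c_f)
print(f"FDI becomes the cheaper mode once demand exceeds q* = {q_star:.0f} units")
```

Below q*, exporting's low fixed cost wins; above it, the affiliate's lower variable cost dominates, which is the "natural progression" in the text.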

In early papers, Lipsey and Weiss (1981; 1984) regressed US exports on US FDI in host countries and found a positive correlation, contrary to the notion that FDI substitutes for exports. These papers, however, ignored the endogeneity of host-market demand, a variable that can move a multinational's incentives for both FDI and exports in the same direction. Grubert and Mutti (1991), using data on export sales similar to those of Lipsey and Weiss (1981), obtained a negative coefficient, although it was statistically insignificant.

Blonigen (2001) argues that the issue is whether the trade flows in question are final goods that substitute for products the multinational's affiliate produces in the same country, or intermediate inputs used to produce those final goods. The former case yields a negative relationship between trade and FDI, while the latter yields a positive one. Using product-level trade and FDI data on Japanese exports to the United States at the 10-digit Harmonized System level, Blonigen finds that increased Japanese FDI in the United States raised Japanese exports of the intermediate inputs used to produce those products, but lowered exports of the final products. Head and Ries (2001) and Swenson (2004) present similar evidence using Japanese firm-level data and US industry-level data, respectively.

A potential issue in the above discussion is that relationships between firms (such as suppliers to distributors) may affect FDI decisions. Japanese firms often have more formal and overt links between suppliers and distributors, the so-called vertical keiretsu. Head, Ries and Swenson (1995) examine whether the location of other Japanese firms in a US state, or in neighboring states, affects the subsequent FDI location choices of Japanese multinationals in the same vertical keiretsu. They find that it does, particularly in the automobile sector, and interpret this as evidence of agglomeration economies among firms with formal supplier-distributor relationships.

Other studies consider the effect of horizontal keiretsu on Japanese FDI. Horizontal keiretsu are groupings of firms across many industries, centered on a major Japanese bank. Three main potential effects of such groups on FDI have been proposed. The primary one is access to the horizontal keiretsu's bank as a low-cost source of finance, which would increase member firms' overall investment, including investment abroad. As Hoshi, Kashyap and Scharfstein (1991) note, a relationship with a keiretsu bank can lower monitoring costs and hence the cost of capital. Their analysis of Japanese manufacturing firms finds evidence that such horizontal-keiretsu firms are less financially constrained in their investment activity than other firms. Subsequent studies have tested whether horizontal keiretsu membership increases Japanese firms' outward FDI, but the results are often insignificant or fragile (see, e.g., Belderbos and Sleuwaegen, 1996).

Blonigen, Ellis and Fausten (2005) noted another possible effect of horizontal keiretsu.

Precise tests of the Heckscher-Ohlin model face a formidable obstacle: when there are more than two countries and more than two factors of production, the model's predictions for trade flows can be highly indeterminate.

Part Two: English Translation

Wuxi Institute of Technology

Graduation Project Report (English Translation)

Original text:

Translation: (small size-4, SimSun typeface)


Part Three: English Translation

AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing

In an effort to improve customization for today's highly competitive global marketplace, many companies are using product families and platform-based product development to increase variety, shorten lead times, and reduce costs. The key to a successful product family is the product platform from which it is derived, either by adding, removing, or substituting one or more modules, or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design has matured rapidly over the past decade, and this paper provides a comprehensive review of the research activity during that time on product family design and platform-based product development for mass customization. Techniques for identifying platform leveraging strategies within a product family are reviewed, along with metrics for assessing the effectiveness of product platforms and product families. Special emphasis is placed on optimization approaches and artificial intelligence techniques that assist product family design and platform-based product development. Web-based systems for product platform customization are also discussed. Examples from industry and academia are presented throughout to highlight the benefits of product families and product platforms. The paper concludes with a discussion of potential research areas to help bridge the gap between the design and manufacture of product families.

Today's highly competitive global marketplace is redefining the way many companies do business. The new form of competitive advantage is mass customization, which, as Pine puts it, is "a new way of viewing business competition, one that makes the identification and fulfillment of the wants and needs of individual customers paramount without sacrificing efficiency, effectiveness, and low costs." In his seminal text on mass customization, Pine argues that "customers can no longer be lumped together in a huge homogeneous market, but are individuals whose individual wants and needs can be ascertained and fulfilled." He attributes the increasing attention to product variety and customer demand to the saturation of the market and the need to improve customer satisfaction: new products must differ from what is already on the market and must meet customer needs more completely. Citing the resulting market dynamics, confirmed by studies of the automotive industry and empirical surveys of manufacturing firms, Sanderson and Uzumeri add that "the emergence of global markets has fundamentally altered competition as many firms have known it," forcing the compression of product development times and the expansion of product variety. Similar themes pervade the text by Wortmann (1997), who examines industry's response in Europe to the "customer-driven" market.

Social development, technological progress, product renewal, the quickening pace of life, and a host of other social and material factors mean that people, while enjoying material life, pay ever more attention to how products rate on "convenience," "comfort," "reliability," "value," "safety," and "efficiency"; that is, to the human-centered design issues so often raised in product design.

A so-called human-centered product is one that incorporates ergonomics. Any product used by people should take ergonomics into account, and product styling is inevitably bound up with it. The relationship can be described this way: with psychology as the center and physiology as the radius, ergonomics builds a harmonious relationship between people and things (products), taps human potential to the fullest, uses human capabilities in a comprehensive and balanced way, and protects human health, thereby raising productivity. Within industrial design alone, everything created for production and daily life, from aerospace systems, urban planning, architectural facilities, automated factories, machinery, and vehicles down to furniture, clothing, stationery, and even basins, cups, bowls, and chopsticks, must treat "the human factor" as an important consideration in design and manufacture. If products are divided into professional equipment and general goods, professional equipment calls for more ergonomic attention, weighted toward physiology, while general products must also attend to psychological concerns, requiring designs more in line with aesthetics and fashion; that is, designs driven primarily by the demand for humanized products.

Ergonomics is a young interdisciplinary science. It originated in Europe and took shape and developed in the United States. In Europe it is called "Ergonomics," a name first proposed by the Polish scholar Jastrzębowski and formed from two Greek roots: "ergo," meaning "work" or "effort," and "nomics," meaning "laws" or "rules." Ergonomics thus means "the laws of human work": the discipline studies how people can labor and exert themselves reasonably and moderately in production or operation. In the United States it is called "Human Engineering" or "Human Factors Engineering." Japan calls it "human engineering" or uses the European name, transliterated as "Ergonomics"; the Russian transliteration is "Эргономика." In China the names have also varied, including "human engineering," "human body engineering," "work efficiency science," "equipment utilization science," and "man-machine engineering." To aid the discipline's development a unified name is desirable, and most people now call it man-machine engineering ("man-machine science" for short). The precise definition of ergonomics is: a comprehensive discipline that takes the human-machine-environment system as its basic object of study and, applying physiology, psychology, and other relevant disciplines, rationally allocates operational functions between humans and machines according to their respective conditions and characteristics, adapting each to the other, so as to create a comfortable and safe working environment for people and optimize working efficiency.


Shandong Jiaotong University, Graduation Project (Thesis)

AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing

In an effort to improve customization for today's highly competitive global marketplace, many companies are utilizing product families and platform-based product development to increase variety, shorten lead times, and reduce costs. The key to a successful product family is the product platform from which it is derived, either by adding, removing, or substituting one or more modules to the platform, or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design has matured rapidly in the past decade, and this paper provides a comprehensive review of the flurry of research activity that has occurred during that time to facilitate product family design and platform-based product development for mass customization. Techniques for identifying platform leveraging strategies within a product family are reviewed along with metrics for assessing the effectiveness of product platforms and product families. Special emphasis is placed on optimization approaches and artificial intelligence techniques to assist in the process of product family design and platform-based product development. Web-based systems for product platform customization are also discussed.

Examples from both industry and academia are presented throughout the paper to highlight the benefits of product families and product platforms. The paper concludes with a discussion of potential areas of research to help bridge the gap between planning and managing families of products and designing and manufacturing them.

Today's highly competitive global marketplace is redefining the way many companies do business. The new form of competitive advantage is mass customization, and is, as Pine (1993a, p. xiii) says, "a new way of viewing business competition, one that makes the identification and fulfillment of the wants and needs of individual customers paramount without sacrificing efficiency, effectiveness, and low costs." In his seminal text on mass customization, Pine (1993a, p. 6) argues that "customers can no longer be lumped together in a huge homogeneous market, but are individuals whose individual wants and needs can be ascertained and fulfilled." He attributes the increasing attention on product variety and customer demand to the saturation of the market and the need to improve customer satisfaction: new products must be different from what is already in the market and must meet customer needs more completely. Sanderson and Uzumeri (1997, p. 3) add that "the emergence of global markets has fundamentally altered competition as many firms have known it," with the resulting market dynamics "forcing the compression of product development times and expansion of product variety." Findings from studies of the automotive industry (Womack et al., 1990; MacDuffie et al., 1996; Alford et al., 2000) and empirical surveys of manufacturing firms (Chinnaiah et al., 1998; Duray et al., 2000) confirm these trends. Similar themes pervade the text by Wortmann et al. (1997), who examine industry's response in Europe to the "customer-driven" market.

Social development, technological progress, product renewal, the quickening pace of life, and a series of other social and material factors mean that people, while enjoying material life, pay more attention to how products rate on "convenience," "comfort," "reliability," "value," "safety," and "efficiency"; that is, to the human-centered design issues often raised in product design.


A so-called user-friendly product is one that incorporates ergonomics. Any product used by people should take ergonomics into account, and product styling is inevitably bound up with it. The relationship can be described this way: with psychology as the center and physiology as the radius, ergonomics builds a harmonious relationship between people and things (products), taps human potential to the fullest, uses human capabilities in a balanced way, and protects human health, so as to improve productivity. Within industrial design alone, everything created for production and daily life, from aerospace systems, urban planning, buildings, automated factories, machinery, and transport down to furniture, clothing, stationery, and even pots, cups, bowls, and chopsticks, must treat "the human factor" as an important condition in design and manufacture. If products are divided into professional equipment and general goods, professional equipment requires more ergonomic consideration and leans toward the physiological, while general products must also balance psychological concerns and call for designs more in line with aesthetics and fashion; that is, they should be driven by the demand for humanized products.

Ergonomics is a rising interdisciplinary science. It originated in Europe and was formed and developed in the United States. In Europe it is known as "Ergonomics," a name first proposed by the Polish scholar Jastrzębowski and composed of two Greek roots: "ergo," meaning "work," and "nomics," meaning "laws." Ergonomics therefore means "the laws of human work": the subject studies how people can labor and exert themselves reasonably and moderately in production or operation. In the United States it is known as "Human Engineering" or "Human Factors Engineering." Japan calls it "human engineering" or uses the European name, transliterated as "Ergonomics"; the Russian transliteration is "Эргономика." In China the names have also varied, including "human engineering," "human body engineering," "work efficiency science," "equipment utilization science," and "man-machine engineering." For the development of the subject it is necessary to unify the name, and most people now call it ergonomics. The exact definition of "Ergonomics" is: a comprehensive discipline that takes the human-machine-environment system as its basic research object and uses physiology, psychology, and other related disciplines, according to the conditions and characteristics of humans and machines, to allocate operational functions reasonably between them and adapt each to the other, in order to create a comfortable and safe working environment for people and to optimize working efficiency.


REFERENCES

[1] Baldwin, C.Y., & Clark, K.B. (2000). Design Rules: Volume 1. The Power of Modularity. Cambridge, MA: MIT Press.

[2] Berti, S., Germani, M., Mandorli, F., & Otto, H.E. (2001). Design of product families—An example within a small and medium sized enterprise. 13th Int. Conf. Engineering Design (Culley, S., Duffy, A., McMahon, C., & Wallace, K., Eds.), Glasgow, UK, pp. 507–514.

[3] Womack, J.P., Jones, D.T., & Roos, D. (1990). The Machine that Changed the World. New York: Rawson Associates.

[4] Wortmann, J.C., Muntslag, D.R., & Timmermans, P.J.M., Eds. (1997). Customer-Driven Manufacturing. New York: Chapman & Hall.

[5] Yigit, A.S., Ulsoy, A.G., & Allahverdi, A. (2002). Optimizing modular product design for reconfigurable manufacturing. Journal of Intelligent Manufacturing 13(4), 309–316.

Part Four: English Translation

School of Information and Control Engineering, Graduation Project (Thesis): English Translation

A Brief History of the Development of Microcomputers (Chinese-English Translation)

Abstract: The microcomputer has become one of the indispensable tools of modern society. With its help, people can perform complex calculations that were previously impractical, accelerating the progress of science and technology. This piece recounts the origin and development of the microcomputer, including the development of its hardware and software and of the microcomputer industry as a whole. Keywords: microcomputer; development



1 The Earliest Computers

The first stored-program computers began to work around 1950. Ours was the Electronic Delay Storage Automatic Calculator (EDSAC), which we built at Cambridge and first used in the summer of 1949.

These early experimental computers were built by people like myself with varied backgrounds. We all had extensive experience in electronic engineering and were confident that this experience would stand us in good stead. That proved true, although we had new things to learn. The most important was that transients must be treated with care: what would cause only a harmless flash on a television screen could lead to a serious error in a computer.

In designing circuits we faced an embarrassment of riches. For example, one could use vacuum-tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted until families of logic came into use. Those who have worked in the computer field will remember TTL, ECL, and CMOS; of these, CMOS has by now become dominant.

In those early years, the IEE (Institution of Electrical Engineers) was still dominated by power engineering. We had to overcome a number of obstacles to get radio engineering, along with the rapidly developing field of electronics, recognized by the IEE as an activity in its own right. We also ran into difficulties because the power engineers' ways of doing things were not ours. A minor source of irritation was that all papers published by the IEE were expected to begin with a lengthy statement of earlier practice, something difficult to provide in a young field with little earlier practice to speak of.

By the early 1960s the heroic pioneering era was over and the computer field was starting up in real earnest. The number of computers in the world had increased greatly, and they were much more reliable than before. To those years we can ascribe the first steps in high-level languages and the birth of the first operating systems. Time-sharing was beginning, and computer graphics was to follow.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day: they had to forget the circuits they knew and start again. It can only be said that they rose superbly to the challenge, and the change could hardly have gone more smoothly.



2 The Development of Computers

2.1 Small-Scale Integration and the Minicomputer

Soon it became possible to put more than one transistor on a piece of silicon, and the integrated circuit was born. As time went on, the level of integration reached the point where one chip could hold enough transistors for a small number of logic gates or flip-flops. This led to the range of chips known as the 7400 series. Each gate or flip-flop was independent and had its own pins; they could be connected by external wiring to make a computer or anything else.

These chips made a new kind of computer possible: the minicomputer. It was something less than a mainframe, but still powerful and far more affordable. Instead of one expensive mainframe for the whole organization, a business or a university could have a minicomputer for each major department.

Before long, minicomputers began to spread and grow more capable. The world was hungry for computing power, and it had long been frustrating that industry could not supply it on the required scale and at a reasonable price. The minicomputer transformed the situation.

The fall in the cost of computing did not begin with the minicomputer; it had always been that way. This is what I meant in my abstract when I said that "inflation" in the computer industry goes the other way: as time passes, people get more for their money, not less.

2.2 Research in Computer Hardware

The period I am describing was a wonderful one for research in computer hardware. Users of the 7400 series could work at the gate and flip-flop level, yet the level of integration gave a reliability far above that of discrete transistors. Researchers, in universities and elsewhere, could build any digital device a fertile imagination could conjure up. In the Computer Laboratory at Cambridge we built the CAP, a minicomputer with remarkable capability logic.

The 7400 series was still going strong in the mid-1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. The design study for the Ring was published just before the announcement of the Ethernet. Until these two systems appeared, people had mostly been content with teletype-based local networks.

Ring networks need high reliability because, as pulses travel round the ring, they must be continually amplified and regenerated. It was the high reliability of the 7400 series that gave us the courage to embark on the Cambridge Ring project.



2.3 The Drive Towards Smaller Transistors

The scale of integration continued to increase, achieved by shrinking the original transistors so that more could fit on a chip. Moreover, the laws of physics were on the manufacturers' side: simply by getting smaller, transistors also became faster. Higher density and higher speed could therefore be had at the same time.

There was a further advantage. Chips are made on discs of silicon known as wafers, each carrying a large number of individual chips that are processed together and later separated. Since shrinkage makes it possible to fit more chips on a wafer, the cost per chip goes down.
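The cost arithmetic here is easy to make concrete. In the sketch below the wafer-processing cost and die sizes are made-up round numbers; only the 300 mm (twelve-inch) wafer diameter echoes the figure mentioned later in the text.

```python
# Back-of-the-envelope: chips per wafer and unit cost as the die shrinks.
# All cost figures are hypothetical illustrations.
import math

def dies_per_wafer(wafer_diameter_mm, die_side_mm):
    """Crude estimate: wafer area divided by die area (ignores edge loss)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // (die_side_mm ** 2))

wafer_cost = 5000.0  # hypothetical cost of processing one wafer
for die_side in (20, 10, 5):  # shrinking die edge length in mm
    n = dies_per_wafer(300, die_side)  # 300 mm = the twelve-inch wafers of 2000
    print(f"{die_side} mm die: {n} dies per wafer, {wafer_cost / n:.2f} each")
```

Halving the die edge roughly quadruples the die count per wafer, which is why shrinkage cuts unit cost even before any speed gain is counted.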

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be a single product for the entire market.

However, detailed cost calculations showed that, once shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers to maintain this advantage. The increase in wafer size was no small matter: originally wafers were only one or two inches in diameter, and by 2000 they had reached twelve inches. At first it puzzled me that, when shrinkage already presented so many problems, the industry should make things harder for itself by moving to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in wafer fabrication plants and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is defined as half the distance between wires in the densest chips made in that technology. At present, 90 nm chips are going into production.

2.4 The Single-Chip Computer

With each shrinkage the number of chips needed fell, and with it the number of wires running between chips. This gave a further increase in overall speed, since transmitting signals from one chip to another takes a long time.

Eventually, shrinkage reached the point at which the whole processor, apart from the caches, could be put on a single chip. This made it possible to build workstations that outperformed the fastest minicomputers of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From that time on, the high-density CMOS silicon chip reigned supreme. As shrinkage technology advanced, millions of transistors could be put on a single chip, and speeds rose in proportion.

In search of extra speed, processor designers began to experiment with new architectural features. One very successful experiment concerned methods for predicting which way program branches would go. I was surprised at how well this worked: it led to a significant speed-up of program execution, and other forms of prediction followed.

Equally surprising is what it has proved possible to put on a single-chip computer by way of advanced features. For example, features originally developed for the IBM Model 91 are now found on microcomputers.

Murphy's Law remained in a state of suspension. It no longer made sense to build experimental computers out of small-scale integrated chips such as the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and find ways to get them made. For a time this was possible, if not easy.

Unfortunately, the cost of making chips has since risen dramatically, mainly because of the increased cost of making the masks used in lithography, the photographic process by which chips are manufactured. In consequence, it has again become very difficult to finance research chips, and this is currently a cause for concern.

2.5 The Semiconductor Roadmap

The extensive research and development underlying all the advances described above has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time, US antitrust law would have forbidden such cooperation. Around 1980, however, the law changed significantly with the introduction of the concept of pre-competitive research. Companies can now collaborate at the pre-competitive stage and then, within the rules, go on to develop their own products in the normal competitive manner.

In the semiconductor industry, the body that manages pre-competitive research is the Semiconductor Industry Association (SIA), active as a US organization since 1992 and international since 1998. Membership is open to any organization that can contribute to the research effort.

Every two years the SIA issues a new version of the International Technology Roadmap for Semiconductors (ITRS), with updates in the intermediate years. The first volume bearing the title "Roadmap" appeared in 1994, but two reports written in 1992 and distributed in 1993 are regarded as the true beginning of the series.

Successive roadmaps set out the best available industry consensus on how the semiconductor industry should move forward. They lay out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to double every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.
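The doubling target stated above is a simple exponential. A minimal sketch, with an arbitrary starting transistor count chosen purely for illustration:

```python
# Component count under the roadmap target: doubling every 18 months.
# The starting count of one million is an arbitrary illustration.

def transistors_after(years, start=1_000_000, doubling_months=18):
    """Project a component count forward under a fixed doubling period."""
    return start * 2 ** (years * 12 / doubling_months)

for years in (3, 15):  # roadmaps look out over a 15-year horizon
    print(f"after {years} years: {transistors_after(years):,.0f} transistors")
```

Over the roadmap's 15-year horizon the 18-month doubling rule implies ten doublings, roughly a thousandfold increase, which is why even one missed generation is treated as a serious stall.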

For some items the way ahead is clear. For others, manufacturing problems are foreseen but solutions are known, even if not yet fully worked out; these areas are coloured yellow in the tables. Areas where problems are foreseen and no manufacturable solutions are known are coloured red, and red areas are often referred to as Red Brick Walls.

The targets set out in the roadmaps have proved realistic as well as challenging, and the progress of the semiconductor industry as a whole has followed them closely. This is a remarkable achievement, and it may be said that the merits of cooperation and competition have been combined.

It is worth noting that the major strategic decisions driving the industry forward have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995 I had begun to wonder what would happen when the inevitable point was reached at which transistors could be made no smaller. With this question in mind I visited ARPA (the research arm of the US Department of Defense) at its headquarters in Washington, where I saw a copy of the 1994 roadmap. It became clear to me that serious problems would arise when the feature size reached 100 nm, projected for 2007, and again at 70 nm in 2010. Later roadmaps moved the projected arrival of 100 nm forward to 2004, and the industry was soon to reach that point.

I presented the above information from the 1994 roadmap, along with such other material as I could obtain, in a lecture to the IEE entitled "The CMOS end-point" and in topics discussed in Computing on 8 February 1996.

My idea at the time was that the end would come when the number of electrons available to represent a one fell from thousands to a few hundred. At that point statistical fluctuations would become troublesome, and eventually the circuits would either fail to work or cease to get any faster. In fact, the physical limitations now beginning to make themselves felt do not arise from an eventual shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum-mechanical tunnelling has become troublesome.

Chip makers face many more problems than those of fundamental physics, especially the difficulties of lithography. The update to the 2001 roadmap published in 2002 stated that, at the present rate of development, the semiconductor industry would stall as it approached 2005 unless major breakthroughs were achieved in key technical areas. This was the most specific description of the "Red Brick Wall", and the most troubling problem the SIA had yet faced. The 2003 roadmap reinforced the point by marking many more areas red, indicating fields where no manufacturable solutions are yet known.

So far, it can be reported with satisfaction, solutions to the problems encountered have been found in time. The roadmap is a remarkable document: it is frank about the problems mentioned above, yet radiates confidence. Mainstream opinion reflects that confidence, with a general expectation that, one way or another, feature sizes will continue to shrink, perhaps to 45 nm or below.

Costs, however, will rise at a great rate, and may in the end be what brings progress in semiconductors to a halt. Exactly where the industry will collectively decide that the escalating costs can no longer be borne depends on the state of the economy as a whole and on the financial position of the semiconductor industry itself.

In the most advanced chips, the insulating layers are now only about five atoms thick. Short of finding better insulating materials, we can go no further, and for this there is at present no solution. We also face problems with on-chip wiring as the wires become ever finer, along with heat dissipation and atomic migration. These problems are quite fundamental: if we cannot make wires and insulators, we cannot make a computer, however much progress is made in CMOS processing or in semiconductor materials. Still less can we expect some new process or material to restore the happy days of integration doubling every eighteen months.

I said above that the general expectation is for feature sizes to continue shrinking to 45 nm or below. In my own view, at some point the continued shrinkage of CMOS as we know it will cease to be feasible, and the industry will need to move beyond it.

Since 2001 the roadmap has included a section on emerging research devices, non-conventional forms of CMOS. The explorations of energetic researchers and speculators will no doubt open useful paths, and the roadmap clearly distinguishes these advances from those in the conventional CMOS we have always used.

2.6 Advances in Memory Technology

Non-conventional CMOS could transform memory technology. Until now we have relied on DRAM for main memory. Unfortunately, as chips have shrunk, DRAM speed has improved only at the margin, while processor chips and their associated caches have doubled in speed every two years. This is the memory gap, and it is a source of anxiety. A breakthrough in memory technology, perhaps using some non-conventional form of CMOS, would yield a major advance in overall computer performance by meeting the need for large memories that caches cannot satisfy.

Perhaps this, rather than bringing peripheral circuits up to the speed of the basic processor, will prove to be the ultimate role of non-conventional CMOS.

2.7 The Shortage of Electrons

Although electrons have so far shown no obvious shortage, in the long run they will ultimately fall short of requirements, and this is perhaps why we are developing non-conventional CMOS. At the Cavendish Laboratory, Haroon Ahmed has done much interesting work on making a single electron represent, more or less, the difference between a 0 and a 1. So far, however, only modest progress has been made towards practical computing devices. With luck, a computer based on single electrons may be achievable some decades from now.



Appendix: Original English Text

Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004
Maurice Wilkes, Computer Laboratory, University of Cambridge

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949. These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer. As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant. In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along. Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department. Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on people get more for their money, not less.

Research in Computer Hardware.

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91, the giant computer at the top of the System 360 range, are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

第 10 頁 共 13 頁

信息與控制工程學院畢業設計(論文)英文翻譯

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry. At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15 year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall. In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner. It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was moved forward in later Roadmaps to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled ‘The CMOS end-point and related topics in Computing’, delivered on 8 February 1996. The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that “the continuation of progress at the present rate will be at risk as we approach 2005 when the Roadmap projects that progress will stall without research breakthroughs in most technical areas”. This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

Insulating layers in the most advanced chips are now approaching a thickness equal to that of 5 atoms. Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this. We may also expect to face problems with on-chip wiring as wire cross sections get smaller. These will concern heat dissipation and atom migration. The above problems are very fundamental. If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or improvements in semiconductor materials. It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.

I said above that there is a general expectation that shrinkage would continue by one means or another to 45 nm or even less. What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it.

Since 2001 the Roadmap has had a section entitled ‘Emerging Research Devices’ covering non-conventional forms of CMOS and the like. Vigorous and opportunist exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.

Advances in Memory Technology


Unconventional CMOS could revolutionize memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are increasing in speed only marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the memory gap, and it is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache. Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.

Article source: an IEEE paper, University of Cambridge, 2004/2/5


Part Five: English Translation

Microsoft Visual Studio Products

Product support

Included products


Microsoft Visual C++

Microsoft Visual C++ is Microsoft's implementation of the C and C++ compilers and the associated language services and specific tools for integration with the Visual Studio IDE. It can compile in either C mode or C++ mode. For C, it follows the ISO C standard with parts of the C99 specification, along with MS-specific additions in the form of libraries.[41] For C++, it follows the ANSI C++ specification along with a few C++0x features.[42] It also supports the C++/CLI specification for writing managed code, as well as mixed-mode code (a mix of native and managed code). Microsoft positions Visual C++ for development in native code, or in code that contains both native and managed components. Visual C++ supports COM as well as the MFC library. For MFC development, it provides a set of wizards for creating and customizing MFC boilerplate code and for creating GUI applications using MFC. Visual C++ can also use the Visual Studio forms designer to design UI graphically. Visual C++ can also be used with the Windows API. It also supports the use of intrinsic functions,[43] which are functions recognized by the compiler itself and not implemented as a library. Intrinsic functions are used to expose the SSE instruction set of modern CPUs. Visual C++ also implements the OpenMP (version 2.0) specification.[44]

Microsoft Visual C#

Microsoft Visual C#, Microsoft's implementation of the C# language, targets the .NET Framework, along with the language services that let the Visual Studio IDE support C# projects. While the language services are a part of Visual Studio, the compiler is provided separately as a part of the .NET Framework. The Visual C# 2008 and 2010 compilers support versions 3.0 and 4.0 of the C# language specification, respectively. Visual C# supports the Visual Studio class designer, forms designer, and data designer, among others.[45]

Microsoft Visual Basic

Microsoft Visual Basic is Microsoft's implementation of the VB.NET language and the associated tools and language services. It was introduced with Visual Studio .NET (2002). Microsoft has positioned Visual Basic for rapid application development.[46][47] Visual Basic can be used to author console applications as well as GUI applications. Like Visual C#, Visual Basic also supports the Visual Studio class designer, forms designer, and data designer, among others. As with C#, the VB.NET compiler is also available as a part of the .NET Framework, but the language services that let VB.NET projects be developed with Visual Studio are available as a part of the latter.

Microsoft Visual Web Developer

Microsoft Visual Web Developer is used to create web sites, web applications and web services using ASP.NET. Either the C# or the VB.NET language can be used. Visual Web Developer can use the Visual Studio Web Designer to graphically design web page layouts.

Team Foundation Server

Included only with Visual Studio Team System, Team Foundation Server is intended for collaborative software development projects and acts as the server-side backend, providing source control, data collection, reporting, and project-tracking functionality. It also includes Team Explorer, the client tool for TFS services, which is integrated into Visual Studio Team System.

Editions

Microsoft Visual Studio is available in the following editions or SKUs:[53]

Express

Visual Studio Express editions are a set of free lightweight individual IDEs, provided as stripped-down versions of the Visual Studio IDE on a per-platform or per-language basis: they install the development tools for a supported platform (web, Windows, phone) or a supported development language (VB, C#) onto individual Visual Studio Shell AppIds. They include only a small set of tools compared with the other editions, and they do not support plug-ins. 64-bit compilers are not included in the Visual Studio Express edition IDEs, but are available as part of the Windows Software Development Kit, which can be installed separately. The initial announcement[55] indicated that the 2012 release would be limited to creating Windows 8 Metro-style applications, but in response to developer feedback Microsoft reversed this decision and announced that desktop application development would also be supported.[56] Microsoft targets the Express IDEs at students and hobbyists. Express editions do not include the full MSDN Library but use the MSDN Essentials Library instead. The languages available as part of the Express IDEs are:[57]

Visual Basic Express

Visual C++ Express

Visual C# Express

Visual Web Developer Express

Express for Windows Phone

Visual Studio LightSwitch

Microsoft Visual Studio LightSwitch is an IDE tailored specifically for creating line-of-business applications built on existing .NET technologies and Microsoft platforms. The applications produced have a three-tier architecture: the user interface runs on Microsoft Silverlight; the logic and data-access tier is built on WCF RIA Services and the Entity Framework and is hosted in ASP.NET; and the primary data store supports Microsoft SQL Server Express, Microsoft SQL Server and Microsoft SQL Azure. LightSwitch also supports other data sources, including Microsoft SharePoint. LightSwitch includes graphical designers for designing entities and entity relationships, entity queries, and UI screens. Business logic may be written in either Visual Basic or Visual C#. The tool can be installed as a stand-alone SKU or as an add-in to Visual Studio 2010 Professional and higher editions.[58]

Visual Studio Professional

Visual Studio Professional provides an IDE for all supported development languages. As of Visual Studio 2010 it replaces the Standard edition.[59] MSDN support is available as MSDN Essentials or the full MSDN Library, depending on the licensing. It supports XML and XSLT editing, and can create deployment packages using only ClickOnce and MSI. It also includes tools such as Server Explorer and integration with Microsoft SQL Server. Support for Windows Mobile development, which was included in Visual Studio 2005 Standard, is only available in the Professional and higher editions of Visual Studio 2008. Support for Windows Phone 7 development was added to all editions in Visual Studio 2010, which no longer supports Windows Mobile development; it has been superseded by Windows Phone 7.

Visual Studio Premium

Visual Studio Premium includes all the tools in Visual Studio Professional and adds additional functionality such as code metrics, profiling, static code analysis, and database unit testing.

Visual Studio Tools for Office

Visual Studio Tools for Office is an SDK and an add-in for Visual Studio that includes development tools for the Microsoft Office suite.

Previously (for Visual Studio .NET 2003 and Visual Studio 2005), it was a separate SKU that supported only the Visual C# and Visual Basic languages, or was included in Team Suite. With Visual Studio 2008, it is no longer a separate SKU but is included with the Professional and higher editions. A separate runtime is required when deploying VSTO solutions.

本文來自 99學術網(www.gaojutz.com),轉載請保留網址和出處

上一篇:windows7安裝維護教案下一篇:dreamweaver教學計劃

91尤物免费视频-97这里有精品视频-99久久婷婷国产综合亚洲-国产91精品老熟女泄火