In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it.[1] Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem.

The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is at most the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.

As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function n → f(n), where n is the size of the input and f(n) is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n.

Resources

Time

The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.

The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on any computer. This is achieved by counting the number of elementary operations that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called steps.
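
As a minimal sketch of this convention (the helper below is illustrative, not a standard tool), one can count the elementary operations of a simple algorithm explicitly; the resulting step count depends only on the input, not on the speed of the machine that runs it.

```python
def linear_search_with_steps(items, target):
    """Return (index, steps): the position of target (or -1) together with
    the number of elementary comparisons performed."""
    steps = 0
    for i, value in enumerate(items):
        steps += 1               # one elementary comparison per entry inspected
        if value == target:
            return i, steps
    return -1, steps

# Searching for an absent element in a list of size n always costs n steps,
# whatever computer executes the loop.
print(linear_search_with_steps([3, 1, 4, 1, 5, 9], 7))   # (-1, 6)
```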

Bit complexity

Formally, the bit complexity refers to the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor. On computers, the number of operations on machine words that are needed is also proportional to the bit complexity. So, the time complexity and the bit complexity are equivalent for realistic models of computation.

Space

Another important resource is the size of computer memory that is needed for running algorithms.

Communication

For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties.

Others

The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the arithmetic complexity multiplied by a constant factor.

For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is O(n^3) for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to Õ(n^4) (soft O notation).
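
As a rough illustration of this growth, the following sketch (the division-free elimination scheme and the function name are assumptions chosen for brevity, not the algorithm discussed above) eliminates below the diagonal of random integer matrices by cross-multiplication and reports the largest bit length that appears; the number of arithmetic operations stays O(n^3), but the operands themselves blow up.

```python
import random

def division_free_elimination_max_bits(matrix):
    """Eliminate below the diagonal by cross-multiplication (no divisions)
    and return the largest bit length of any intermediate entry."""
    a = [row[:] for row in matrix]
    n = len(a)
    max_bits = max(abs(x).bit_length() for row in a for x in row)
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # One arithmetic operation of the elimination; the entries
                # roughly double in bit length at every elimination stage.
                a[i][j] = a[i][j] * a[k][k] - a[i][k] * a[k][j]
                max_bits = max(max_bits, abs(a[i][j]).bit_length())
            a[i][k] = 0
    return max_bits

random.seed(0)
for n in (4, 6, 8, 10, 12):
    m = [[random.randint(1, 9) for _ in range(n)] for _ in range(n)]
    print(n, division_free_elimination_max_bits(m))
# The printed bit lengths grow roughly exponentially with n, even though the
# number of arithmetic operations is only O(n^3).
```

Fraction-free variants such as Bareiss's algorithm, or the multi-modular approach mentioned above, keep the intermediate entries small and thus keep the bit complexity polynomial.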

In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.
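
For instance, a search routine can be instrumented to count only the comparisons it makes between entries; this sketch (a hypothetical helper, not a library function) does so for binary search on sorted data.

```python
def binary_search_comparisons(sorted_items, target):
    """Search sorted_items for target, counting comparisons between entries."""
    lo, hi, comparisons = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1                      # equality test against an entry
        if sorted_items[mid] == target:
            return True, comparisons
        comparisons += 1                      # ordering test against an entry
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

# On suitably organized (sorted) data of size n, the comparison count grows
# like log2(n): about 40 comparisons for a million entries.
print(binary_search_comparisons(list(range(1_000_000)), -1))
```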

Complexity as a function of input size

It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size n (in bits) of the input, and therefore, the complexity is a function of n. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used.

The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, it is the worst-case time complexity that is meant.
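
Both measures can be made concrete on a toy algorithm whose inputs of a given size can be enumerated exhaustively. The sketch below (an illustrative setup that takes bit strings of length n as inputs) computes the worst-case and average-case number of steps of a scan that stops at the first 1.

```python
from itertools import product
from statistics import mean

def steps_to_find_first_one(bits):
    """Number of entries a left-to-right scan inspects before it finds a 1
    (or exhausts the input)."""
    steps = 0
    for b in bits:
        steps += 1
        if b == 1:
            break
    return steps

n = 12
costs = [steps_to_find_first_one(bits) for bits in product((0, 1), repeat=n)]
print("worst case  :", max(costs))    # n, reached by the all-zero input
print("average case:", mean(costs))   # slightly below 2 over all 2^n inputs
```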

Asymptotic complexity

It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values have little practical use, as any change of computer or of model of computation would change the complexity somewhat. Moreover, resource use is not critical for small values of n, so for small n the ease of implementation is generally more important than a low complexity.

For these reasons, one generally focuses on the behavior of the complexity for large n, that is, on its asymptotic behavior as n tends to infinity. Therefore, the complexity is generally expressed using big O notation.

For example, the usual algorithm for integer multiplication has a complexity of O(n^2); this means that there is a constant c_u such that the multiplication of two integers of at most n digits may be done in a time less than c_u n^2. This bound is sharp in the sense that the worst-case complexity and the average-case complexity are Ω(n^2), which means that there is a constant c_l such that these complexities are larger than c_l n^2. The radix does not appear in these complexities, as changing the radix changes only the constants c_u and c_l.
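
A minimal sketch of that usual algorithm, written for explicit digit lists rather than Python's built-in integers (which use faster methods), makes the n × n count of elementary digit operations visible.

```python
def schoolbook_multiply(a_digits, b_digits, base=10):
    """Multiply two numbers given as little-endian digit lists, counting
    elementary digit-by-digit multiplications. For two n-digit inputs the
    count is exactly n*n, matching the O(n^2) bound."""
    result = [0] * (len(a_digits) + len(b_digits))
    digit_mults = 0
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            digit_mults += 1                       # one elementary operation
            total = result[i + j] + a * b + carry
            result[i + j] = total % base
            carry = total // base
        result[i + len(b_digits)] += carry
    return result, digit_mults

digits, count = schoolbook_multiply([9] * 8, [9] * 8)   # two 8-digit numbers
print(count)   # 64 = 8 * 8 elementary digit multiplications
```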

Models of computation

The evaluation of the complexity relies on the choice of a model of computation, which consists in defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as integer multiplication in time O(n log n), that the explicit definition of the model of computation is required for proofs.

Deterministic models

A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers.

When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.

Non-deterministic computation

In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states used when running specific quantum algorithms, such as Shor's factorization, which has so far been demonstrated only for very small integers (as of March 2018: 21 = 3 × 7).

Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem, are NP-complete. For all these problems, the best known algorithm has exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. As of 2017 it is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for interesting lengths of input.
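
The deterministic simulation mentioned above can be sketched for Boolean satisfiability, where trying every assignment plays the role of the non-deterministic "guess" and accounts for the exponential factor. This is a minimal sketch; the clause encoding (lists of signed integers) is an illustrative convention, not part of the discussion above.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2**n_vars assignments.
    Each clause is a list of nonzero ints: +i stands for variable i, -i for its negation."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if at least one of its literals is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True, assignment
    return False, None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```

For n variables the loop may run 2^n times, which is what "exponential time" refers to; no known deterministic algorithm solves the problem in polynomial time in the worst case.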

Parallel and distributed computation

Parallel and distributed computing consist of splitting a computation among several processors that work simultaneously. The difference between the models lies mainly in the way information is transmitted between processors. Typically, in parallel computing the data transmission between processors is very fast, while in distributed computing the data transmission is done through a network and is therefore much slower.

The time needed for a computation on N processors is at least the quotient by N of the time needed by a single processor. In fact, this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor.

The main complexity problem is thus to design algorithms such that the product of the computation time and the number of processors is as close as possible to the time needed for the same computation on a single processor.
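
As a numerical sketch of why the quotient bound is not attained, suppose a fixed fraction of the work is inherently sequential (this is essentially Amdahl's argument, which the text above does not name; the 5% figure is an arbitrary assumption).

```python
def parallel_time(t_single, n_processors, sequential_fraction):
    """Idealized running time on n processors when a fixed fraction of the
    work cannot be parallelized (Amdahl-style assumption)."""
    sequential = t_single * sequential_fraction
    parallel = t_single * (1.0 - sequential_fraction) / n_processors
    return sequential + parallel

t1 = 100.0   # single-processor time, in arbitrary units
for n in (1, 2, 4, 8, 16, 64):
    t_n = parallel_time(t1, n, sequential_fraction=0.05)
    # "work" = time x processors; a good parallel algorithm keeps it close to t1
    print(n, round(t_n, 2), "work:", round(n * t_n, 1), "quotient bound:", round(t1 / n, 2))
```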

Quantum computing

A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer.

Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.

Problem complexity (lower bounds)

The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.

It follows that any complexity bound for an algorithm, expressed with big O notation, is also an upper bound on the complexity of the corresponding problem.

On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.

For solving most problems, it is required to read all input data, which normally needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity Ω(n).

The solutions of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of n polynomial equations of degree d in n indeterminates may have up to d^n complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is Ω(d^n). For this problem, an algorithm of complexity d^O(n) is known, which may thus be considered as asymptotically quasi-optimal.

A nonlinear lower bound of Ω(n log n) is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is O(n log n). This lower bound results from the fact that there are n! ways of ordering n objects. As each comparison splits this set of n! orders into two parts, the number N of comparisons that are needed for distinguishing all orders must verify 2^N ≥ n!, which implies N = Ω(n log n) by Stirling's formula.
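
The inequality 2^N ≥ n! can be checked numerically; the short computation below compares the resulting lower bound ⌈log2(n!)⌉ with n log2 n, showing that both grow at the same rate.

```python
from math import ceil, factorial, log2

# Any comparison sort must distinguish all n! orderings, so the number N of
# comparisons satisfies 2^N >= n!, i.e. N >= log2(n!), which is of order n log2 n.
for n in (10, 100, 1000):
    print(n, ceil(log2(factorial(n))), round(n * log2(n)))
```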

A standard method for getting lower bounds of complexity consists of reducing a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size f(n) of a problem B, and that the complexity of A is Ω(g(n)). Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. Then the complexity of the problem B is Ω(g(h(n))). This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is Ω(n^k) for every positive integer k.

Use in algorithm design

Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected.

It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because this power increase allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require O(n^2) comparisons would have to do a trillion comparisons, which would need more than a day at a speed of ten million comparisons per second. On the other hand, quicksort and merge sort require only about n log2 n comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For n = 1,000,000, this gives approximately 20,000,000 comparisons, which would take only about two seconds at ten million comparisons per second.
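
The back-of-the-envelope figures above can be reproduced in a few lines; the rate of ten million comparisons per second is the same illustrative assumption as in the paragraph.

```python
from math import log2

# Reproduce the estimates above; the comparison rate is an illustrative assumption.
n = 1_000_000
rate = 10_000_000                 # comparisons per second

quadratic = n * n                 # elementary O(n^2) sorting algorithms
linearithmic = n * log2(n)        # merge sort (worst case), quicksort (average case)

print(f"n^2      : {quadratic:.1e} comparisons, about {quadratic / rate / 3600:.0f} hours")
print(f"n log2 n : {linearithmic:.1e} comparisons, about {linearithmic / rate:.0f} seconds")
# Roughly 1e12 comparisons (about 28 hours) versus 2e7 comparisons (about 2 seconds).
```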

Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort of improving the efficiency of an implementation on these steps.

References

1. Vadhan, Salil (2011), "Computational Complexity" (PDF), in van Tilborg, Henk C. A.; Jajodia, Sushil (eds.), Encyclopedia of Cryptography and Security, Springer, pp. 235–240, doi:10.1007/978-1-4419-5906-5_442, ISBN 9781441959065.