I recently heard about ternary search, in which we divide an array into three parts and compare. There are two comparisons per step, but each step reduces the array to n/3. Why don't people use this much?

What if the array only has two elements? – Carlos Muñoz, Aug 17 '10 at 0:11

It's a special case. – mousey, Aug 17 '10 at 0:13

Surprisingly, all the answers only talk about time complexity. Space complexity is just as important in many cases, and ternary trees are generally more space efficient. (And, given modern CPU architecture, space complexity often has a significant impact on actual performance.) – Ian Mercer, Jan 11 '16 at 23:16

I appreciate this is a six-year-old question, but what you describe is not a ternary search; could someone please edit the title? FYI, a ternary search is for unimodal functions, binary search for monotonic ones. – marcv81, Nov 7 '16 at 9:55
Actually, people do use k-ary trees for arbitrary k.
This is, however, a tradeoff.
To find an element in a k-ary tree, you need around k*ln(N)/ln(k) operations (remember the change-of-base formula). The larger your k is, the more overall operations you need.
The logical extension of what you are saying is "why don't people use an N-ary tree for N data elements?". Which, of course, would be an array.
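As a rough sanity check on the k*ln(N)/ln(k) figure, here is a quick sketch of my own (assuming a worst case of k comparisons per level):

```python
import math

def worst_case_ops(k, n):
    """Worst-case comparisons to find an element among n items with a
    k-ary split: up to k comparisons per level, over log_k(n) levels,
    i.e. k * ln(n) / ln(k)."""
    return k * math.log(n) / math.log(k)

for k in (2, 3, 4, 10):
    print(f"k={k}: ~{worst_case_ops(k, 10**6):.1f} ops")
```

The count dips slightly at k = 3 (e is the true minimizer) and then climbs; k = 2 and k = 4 come out exactly equal.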

B-trees are k-ary trees that sit between arrays and binary trees, and are commonly used; there's certainly a purpose for k-ary trees of order larger than 3. – Dean J, Aug 19 '10 at 19:57

Btw, k/ln(k) is minimal at k = e (i.e. k ≈ 2.71), so the "optimal" k-ary tree is e-ary. Binary is pretty close to that. – Jul 24 '13 at 13:40

Ternary is actually the closer one. But since the difference is not that big, simplicity of implementation matters more, so binary wins. – Dec 27 '16 at 19:26
A ternary search still gives you the same asymptotic complexity, O(log N) search time, while adding complexity to the implementation.
The same argument applies to why you would not want a quad search or any other higher order.
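For reference, here is a minimal sketch (mine, not from the question) of an array-based ternary search; note the up-to-two comparisons per step:

```python
def ternary_search(a, target):
    """Search a sorted list by splitting the range into three parts;
    each step costs up to two key comparisons but keeps only 1/3."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third
        if a[m1] == target:
            return m1
        if a[m2] == target:
            return m2
        if target < a[m1]:
            hi = m1 - 1              # keep the leftmost third
        elif target > a[m2]:
            lo = m2 + 1              # keep the rightmost third
        else:
            lo, hi = m1 + 1, m2 - 1  # keep the middle third
    return -1
```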

@Nikita: log2 N = log3 N / log3 2 = constant * log3 N = O(log N). There is only ever a constant-factor difference between any two logarithmic orders of complexity. – Akusete, Aug 17 '10 at 0:17
Searching 1 billion (a US billion: 1,000,000,000) sorted items would take an average of about 15 compares with binary search and about 9 compares with a ternary search - not a huge advantage. And note that each 'ternary compare' might involve 2 actual comparisons.

The fact that a 'k-ary compare' in fact consists of k key comparisons is possibly the most important factor to mention in answering this question. +1 – Aug 17 '10 at 1:15

I had no idea there were different meanings of 'a billion' until now. Cheers. – Akusete, Apr 27 '11 at 8:30

@Akusete: A billion here and a billion there... pretty soon you've got some big numbers. – Apr 27 '11 at 18:30

@MichaelBurr: I'm afraid binary search in an array of 1 billion records takes an average of about 30 comparisons, not 15 ;) – chqrlie, Jul 4 '17 at 16:21
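A quick back-of-the-envelope check of these counts (illustrative only; these are worst cases, not averages):

```python
import math

n = 1_000_000_000
binary_levels = math.log(n, 2)    # one comparison per level
ternary_levels = math.log(n, 3)   # up to two comparisons per level

print(f"binary:  ~{binary_levels:.1f} comparisons")
print(f"ternary: ~{2 * ternary_levels:.1f} comparisons worst case")
```

This supports the ~30 figure for binary search; counting two comparisons per ternary level, ternary actually comes out worse.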
Wow. The top voted answers miss the boat on this one, I think.
Your CPU doesn't support ternary logic as a single operation; it breaks ternary logic into several steps of binary logic. The optimal code for the CPU is binary logic. If chips that supported ternary logic as a single operation were common, you'd be right.
B-trees can have multiple branches at each node; an order-3 B-tree is ternary logic. Each step down the tree takes two comparisons instead of one, and this will probably make it slower in CPU time.
B-trees, however, are pretty common. If you assume that every node in the tree is stored separately on disk, you're going to spend most of your time reading from disk... so the disk, not the CPU, will be the bottleneck. So you take a B-tree with 100,000 children per node, or whatever else will barely fit into one block of memory. B-trees with that kind of branching factor are rarely more than three nodes high, and you'd only pay for three disk reads (three stops at the bottleneck) to search an enormous, enormous dataset.
Reviewing:
- Ternary trees aren't supported by hardware, so they run less quickly.
- B-trees with orders much, much higher than 3 are common for disk optimization of large datasets; once you've gone past 2, it pays to go far higher than 3.

Ternary trees have three jump operations (if less, go left: JB; if more, go right: JA; if equal, go down: JE). A binary tree has two (if less, go left: JB; if more, go right: JA). Whether the processor has a binary or ternary architecture has nothing to do with it; what matters is only the instructions JA, JB and JE on an IA-32 architecture. – Abel, Nov 15 '11 at 22:02
The only way a ternary search can be faster than a binary search is if a 3-way partition determination can be done for less than about 1.55 times the cost of a 2-way comparison. If the items are stored in a sorted array, the 3-way determination will on average be 1.66 times as expensive as a 2-way determination. If information is stored in a tree, however, the cost of fetching information is high relative to the cost of comparing, and cache locality means the cost of fetching a pair of related data items is not much worse than the cost of fetching a single one; in that case, a ternary or n-way tree may improve efficiency greatly.
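These thresholds can be checked numerically (my own arithmetic; the exact break-even ratio is ln(3)/ln(2) ≈ 1.585, in the ballpark of the ~1.55 quoted):

```python
import math

# Ternary search needs ln(2)/ln(3) as many levels as binary search,
# so a 3-way step must cost less than ln(3)/ln(2) times a 2-way step.
break_even = math.log(3) / math.log(2)      # ~1.585
avg_3way_cost = (1/3) * 1 + (2/3) * 2       # ~1.67 on a sorted array

print(break_even, avg_3way_cost)
```

Since 1.67 > 1.585, the sorted-array case loses, matching the answer's conclusion.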
What makes you think Ternary search should be faster?
Average number of comparisons:
- ternary search: ((1/3)*1 + (2/3)*2) * ln(n)/ln(3) ≈ 1.517*ln(n)
- binary search: 1 * ln(n)/ln(2) ≈ 1.443*ln(n)
Worst-case number of comparisons:
- ternary search: 2 * ln(n)/ln(3) ≈ 1.820*ln(n)
- binary search: 1 * ln(n)/ln(2) ≈ 1.443*ln(n)
So it looks like ternary search is worse.
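The constants above are easy to verify numerically (a quick check of my own):

```python
import math

# Comparisons per ln(n), per the formulas above.
ternary_avg = ((1/3) * 1 + (2/3) * 2) / math.log(3)   # ~1.517
binary_avg = 1 / math.log(2)                          # ~1.443
ternary_worst = 2 / math.log(3)                       # ~1.820

print(ternary_avg, binary_avg, ternary_worst)
```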

@Moron: Would it be 2 * log(n)/log(3)? I do not always need to make 2 comparisons at a level. For example, if I have [..a..b..] and x < a is true, then there is no need to compare with b. – Lazer, Aug 17 '10 at 4:43
@Lazer: He did say 'worst case'. Assuming a balanced tree and an even distribution of lookups (a shaky assumption), you would need 1.5 comparisons per node on average. – Akusete, Aug 17 '10 at 5:43

I'm not sure we are interested in the worst case so much as in how long it takes on average. – Feb 13 '12 at 11:28


This can't be right: the worst case is not the average case for binary search, because you only go down to the last iteration if you didn't find the item on previous iterations. Half the time, you won't make it to the last iteration, so something is certainly amiss here. – corsiKa, Mar 5 '14 at 15:57
Also, note that this sequence generalizes to linear search if we go on:
binary search
ternary search
...
n-ary search ≡ linear search
So, in an n-ary search, we will have "only one COMPARE", which might take up to n actual comparisons.
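To illustrate this continuum, here is a hypothetical k-ary search sketch (the function and its structure are mine, purely for illustration): k = 2 behaves like binary search, and as k approaches the array length it degenerates into a linear scan.

```python
def k_ary_search(a, x, k, lo=0, hi=None):
    """Search sorted list a for x by probing k - 1 evenly spaced
    positions per step; each step keeps roughly 1/k of the range."""
    if hi is None:
        hi = len(a)
    size = hi - lo
    if size <= 0:
        return -1
    if size <= k:
        # Range no bigger than the fan-out: just scan it linearly.
        for i in range(lo, hi):
            if a[i] == x:
                return i
        return -1
    prev = lo
    for i in range(1, k):
        m = lo + (i * size) // k
        if a[m] == x:
            return m
        if a[m] > x:
            # x must lie in the segment before this probe.
            return k_ary_search(a, x, k, prev, m)
        prev = m + 1
    # x is larger than every probe: search the last segment.
    return k_ary_search(a, x, k, prev, hi)
```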

Except there's some very happy middle ground between those two extremes. See my answer, maybe? – Dean J, Aug 19 '10 at 19:58

@Dean J: Yes, you are right. If there were built-in support for an atomic ternary compare, ternary search would be the choice. But there are a lot of limitations to implementing ternary logic at the hardware level itself (one reason it isn't popular). I remember reading somewhere that e is theoretically the most economical base, though it is still a problem to actually build that in hardware. So, until we have machines that support ternary logic at the hardware level, binary is the way to go. – Lazer, Aug 20 '10 at 7:59
The interesting part is that binary search is optimal for CPU cycles, but not if the tree is large enough that some of it is paged to disk; in that case, an n-ary tree with a huge branching factor actually works much better. – Dean J, Aug 20 '10 at 13:33
Ternary search is more efficient in the best case, which involves searching for the first element (or perhaps the last, depending on which comparison you do first). For elements farther from the end you check first, while two comparisons narrow the array to a third each time, the same two comparisons with binary search narrow the search space to a quarter.
Add to that, binary search is simpler: you just compare and take one half or the other, rather than compare, if less take the first third, else compare again, if less take the second third, else take the last third.
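For contrast, the simpler binary version is a single compare-and-halve loop (a minimal sketch):

```python
def binary_search(a, target):
    """Classic binary search on a sorted list: compare once and keep
    one half or the other."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```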
Ternary search can be used effectively on parallel architectures such as FPGAs and ASICs. For example, if the internal FPGA memory required for the search is less than half of the FPGA's resources, you can make a duplicate memory block. This allows two different memory addresses to be accessed simultaneously, so all comparisons can be done in a single clock cycle. This is one of the reasons a 100 MHz FPGA can sometimes outperform a 4 GHz CPU :)
Here's some random experimental evidence that I haven't vetted at all showing that it's slower than binary search.

Permission denied: "This blog is open to invited readers only (ehon.blogspot.com). It doesn't look like you have been invited to read this blog. If you think this is a mistake, you might want to contact the blog author and request an invitation." – Feb 28 '14 at 16:33
Almost all textbooks and websites on binary search trees do not really talk about binary trees! They show you ternary search trees! True binary trees store data in their leaves, not in internal nodes (except for keys used for navigation). Some call these leaf trees and distinguish them from the node trees shown in textbooks:
J. Nievergelt, C.K. Wong: Upper Bounds for the Total Path Length of Binary Trees, Journal ACM 20 (1973) 1–6.
The following about this is from Peter Brass's book on data structures.
2.1 Two Models of Search Trees
In the outline just given, we suppressed an important point that at first seems trivial, but indeed it leads to two different models of search trees, either of which can be combined with much of the following material, but one of which is strongly preferable.
If we compare in each node the query key with the key contained in the node and follow the left branch if the query key is smaller and the right branch if the query key is larger, then what happens if they are equal? The two models of search trees are as follows:
1. Take the left branch if the query key is smaller than the node key; otherwise take the right branch, until you reach a leaf of the tree. The keys in the interior nodes of the tree are only for comparison; all the objects are in the leaves.
2. Take the left branch if the query key is smaller than the node key; take the right branch if the query key is larger than the node key; and take the object contained in the node if they are equal.
This minor point has a number of consequences:
- In model 1, the underlying tree is a binary tree, whereas in model 2, each tree node is really a ternary node with a special middle neighbor.
- In model 1, each interior node has a left and a right subtree (each possibly a leaf node of the tree), whereas in model 2, we have to allow incomplete nodes, where the left or right subtree might be missing, and only the comparison object and key are guaranteed to exist. So the structure of a search tree of model 1 is more regular than that of a tree of model 2; this is, at least for the implementation, a clear advantage.
- In model 1, traversing an interior node requires only one comparison, whereas in model 2, we need two comparisons to check the three possibilities.
Indeed, trees of the same height in models 1 and 2 contain at most approximately the same number of objects, but one needs twice as many comparisons in model 2 to reach the deepest objects of the tree. Of course, in model 2, there are also some objects that are reached much earlier; the object in the root is found with only two comparisons, but almost all objects are on or near the deepest level.
Theorem. A tree of height h in model 1 contains at most 2^h objects; a tree of height h in model 2 contains at most 2^(h+1) − 1 objects.
This is easily seen because the tree of height h has as left and right subtrees a tree of height at most h − 1 each, and in model 2 one additional object between them.
- In model 1, keys in interior nodes serve only for comparison and may reappear in the leaves for the identification of the objects. In model 2, each key appears only once, together with its object.
It is even possible in model 1 that there are keys used for comparison that do not belong to any object, for example, if the object has been deleted. By conceptually separating these functions of comparison and identification, this is not surprising, and in later structures we might even need to define artificial tests not corresponding to any object, just to get a good division of the search space. All keys used for comparison are necessarily distinct because in a model 1 tree, each interior node has nonempty left and right subtrees. So each key occurs at most twice, once as comparison key and once as identification key in the leaf.
Model 2 became the preferred textbook version because in most textbooks the distinction between object and its key is not made: the key is the object. Then it becomes unnatural to duplicate the key in the tree structure. But in all real applications, the distinction between key and object is quite important. One almost never wishes to keep track of just a set of numbers; the numbers are normally associated with some further information, which is often much larger than the key itself.
You may have heard of ternary search in those riddles that involve weighing things on scales. A scale can return three answers: the left side is lighter, both are the same, or the left side is heavier, so a ternary step takes just one "comparison". Computers, however, use Boolean logic, which only has two answers; to do the ternary step you actually have to do two comparisons instead of one. I guess there are some cases where this is still faster, as earlier posters mentioned, but you can see that ternary search isn't always better, and it's more confusing and less natural to implement on a computer.
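The point that one ternary "weighing" costs two Boolean comparisons on a computer can be sketched as:

```python
def three_way(a, b):
    """Emulate a scale's single ternary outcome using two Boolean
    comparisons: -1 (left lighter), 0 (balanced), 1 (left heavier)."""
    if a < b:        # first Boolean comparison
        return -1
    if a > b:        # second Boolean comparison
        return 1
    return 0
```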
Theoretically the minimum of k/ln(k) is achieved at e, and since 3 is closer to e than 2, it requires fewer comparisons. You can check that 3/ln(3) ≈ 2.73 and 2/ln(2) ≈ 2.89. The reason binary search can still be faster is that its code has fewer branches and runs faster on modern CPUs.
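A quick numeric check of those two values (my own sketch):

```python
import math

def cost(k):
    """The k/ln(k) constant: comparisons per ln(n) for a k-ary split,
    assuming k comparisons per level."""
    return k / math.log(k)

print(cost(2), cost(3), cost(math.e))  # e gives the minimum
```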
I have just posted a blog about ternary search showing some results, and I have provided some initial implementations in my git repo. I totally agree with everyone about the theory of ternary search, but why not give it a try? The implementation part is easy enough if you have three years of coding experience. I found that if you have a huge data set and you need to search it many times, ternary search can have an advantage. If you think you can do better with a ternary search, go for it.
Although you get the same big-O complexity, O(log n), with both search trees, the difference is in the constants: you have to do more comparisons at each level of a ternary search tree. So the difference boils down to k/ln(k) for a k-ary search tree. This has its minimum near k = e ≈ 2.7; in practice, k = 2 wins through simplicity of implementation.