# Concept:

• Implement a QuickSort-style partition that splits the array into two parts
• Choose a randomized pivot (this article uses the leftmost element as the pivot for convenience)
• Partition the array
• Elements smaller than the pivot go into the left part
• Elements larger than the pivot go into the right part
• If the desired order statistic is larger than the pivot's index (i-1), then it must lie in the right part
• If the desired order statistic is smaller than the pivot's index (i-1), then it must lie in the left part
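The steps above can be sketched compactly in Python (a hypothetical helper that, like the C code below, uses the leftmost element as the pivot):

```python
def rselect(arr, order):
    """Return the order-th (0-based) smallest element of arr."""
    if len(arr) == 1:
        return arr[0]
    pivot = arr[0]                                 # leftmost element as pivot
    left = [x for x in arr[1:] if x < pivot]       # elements smaller than the pivot
    right = [x for x in arr[1:] if x >= pivot]     # elements larger (or equal)
    if order == len(left):                         # the pivot sits at index len(left)
        return pivot
    if order < len(left):                          # desired statistic lies in the left part
        return rselect(left, order)
    return rselect(right, order - len(left) - 1)   # ...otherwise in the right part

assert rselect([3, 1, 4, 1, 5, 9, 2, 6], 0) == 1   # minimum
assert rselect([3, 1, 4, 1, 5, 9, 2, 6], 4) == 4   # 5th smallest
```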

# Time Complexity

Expected $O(n)$ (the worst case is $O(n^2)$, but a randomized pivot makes it very unlikely).

# Conclusion

Randomized Selection provides a simple and fast way to find the $i$th order statistic.
Alternatively, "Deterministic Selection" (median of medians) is another solution.

# Find the $i$th order statistic $X_{i}$

```c
#include <stdio.h>
#include <stdlib.h>

void swap(int *a, int *b){
    int temp = *a;
    *a = *b;
    *b = temp;
}

/* Returns the order-th (0-based) smallest element of Array[0..len-1]. */
int RSelect(int Array[], int len, int order){
    int i, j, k, result;
    int pivot;
    int *leftHalve;
    int *rightHalve;

    if(len == 1) return Array[0];

    // Partition
    pivot = Array[0]; // always select the leftmost element as pivot
    i = 1;
    for(j = 1; j < len; j++){
        if(Array[j] < pivot){
            swap(&Array[j], &Array[i]);
            i++;
        }
    }

    swap(&Array[0], &Array[i-1]); // place the pivot at its final index i-1

    if(i-1 == order) return Array[i-1]; // the pivot is exactly the order statistic

    // Divide
    if(i-1 > order){
        // The order statistic lies in the left part (indices 0..i-1)
        leftHalve = calloc(i, sizeof(int));
        for(k = 0; k < i; k++){
            leftHalve[k] = Array[k];
        }
        result = RSelect(leftHalve, i, order);
        free(leftHalve);
    }else{
        // The order statistic lies in the right part (indices i..len-1)
        rightHalve = calloc(len-i, sizeof(int));
        for(k = 0; k < len-i; k++){
            rightHalve[k] = Array[i+k];
        }
        result = RSelect(rightHalve, len-i, order-i);
        free(rightHalve);
    }
    return result;
}
```

# Fees at RWTH Aachen University

| Monthly expenditure | Minimum cost per month |
| --- | --- |
| Accommodation | € 160 |
| Food and general expenses | € 350 |
| Health insurance | approx. € 80 |
| University fees (social security contribution, semester ticket), approx. € 200 per semester | approx. € 38 |
| Books and materials | € 70 (may be significantly higher, according to subject studied) |
| **Total** | approx. € 700 per month (€ 8,400 per year) |

## C# events vs. delegates

### Event invocation

Furthermore, an event can only be invoked from within the class that declared it, whereas a delegate field can be invoked by whoever has access to it. For example:

```csharp
using System;

namespace EventAndDelegate
{
    delegate void MsgHandler(string s);

    class Class1
    {
        public static event MsgHandler msgNotifier;
        public static MsgHandler msgNotifier2;

        static void Main(string[] args)
        {
            new Class2().test();
        }
    }

    class Class2
    {
        public void test()
        {
            Class1.msgNotifier("test"); // error CS0070: The event 'EventAndDelegate.Class1.msgNotifier' can only appear on the left hand side of += or -= (except when used from within the type 'EventAndDelegate.Class1')
            Class1.msgNotifier2("test2"); // compiles fine
        }
    }
}
```

This restriction on invocations is quite strong. Even derived classes from the class declaring the event aren’t allowed to fire the event. A way to deal with this is to have a protected virtual method to trigger the event.

### Conclusion

We have seen that the event keyword is a modifier for a delegate declaration that allows it to be included in an interface, constrains its invocation to within the class that declares it, provides it with a pair of customizable accessors (add and remove), and forces the signature of the delegate (when used within the .NET framework).

# 2 PEPs 252 and 253: Type and Class Changes

The largest and most far-reaching changes in Python 2.2 are to Python’s model of objects and classes. The changes should be backward compatible, so it’s likely that your code will continue to run unchanged, but the changes provide some amazing new capabilities. Before beginning this, the longest and most complicated section of this article, I’ll provide an overview of the changes and offer some comments.

A long time ago I wrote a Web page (http://www.amk.ca/python/writing/warts.html) listing flaws in Python’s design. One of the most significant flaws was that it’s impossible to subclass Python types implemented in C. In particular, it’s not possible to subclass built-in types, so you can’t just subclass, say, lists in order to add a single useful method to them. The UserList module provides a class that supports all of the methods of lists and that can be subclassed further, but there’s lots of C code that expects a regular Python list and won’t accept a UserList instance.

Python 2.2 fixes this, and in the process adds some exciting new capabilities. A brief summary:

• You can subclass built-in types such as lists and even integers, and your subclasses should work in every place that requires the original type.
• It’s now possible to define static and class methods, in addition to the instance methods available in previous versions of Python.
• It’s also possible to automatically call methods on accessing or setting an instance attribute by using a new mechanism called properties. Many uses of __getattr__ can be rewritten to use properties instead, making the resulting code simpler and faster. As a small side benefit, attributes can now have docstrings, too.
• The list of legal attributes for an instance can be limited to a particular set using slots, making it possible to safeguard against typos and perhaps make more optimizations possible in future versions of Python.

Some users have voiced concern about all these changes. Sure, they say, the new features are neat and lend themselves to all sorts of tricks that weren’t possible in previous versions of Python, but they also make the language more complicated. Some people have said that they’ve always recommended Python for its simplicity, and feel that its simplicity is being lost.

Personally, I think there’s no need to worry. Many of the new features are quite esoteric, and you can write a lot of Python code without ever needing to be aware of them. Writing a simple class is no more difficult than it ever was, so you don’t need to bother learning or teaching the new features unless they’re actually needed. Some very complicated tasks that were previously only possible from C will now be possible in pure Python, and to my mind that’s all for the better.

I’m not going to attempt to cover every single corner case and small change that were required to make the new features work. Instead this section will paint only the broad strokes. See section 2.5, “Related Links”, for further sources of information about Python 2.2’s new object model.

## 2.1 Old and New Classes

First, you should know that Python 2.2 really has two kinds of classes: classic or old-style classes, and new-style classes. The old-style class model is exactly the same as the class model in earlier versions of Python. All the new features described in this section apply only to new-style classes. This divergence isn’t intended to last forever; eventually old-style classes will be dropped, possibly in Python 3.0.

So how do you define a new-style class? You do it by subclassing an existing new-style class. Most of Python’s built-in types, such as integers, lists, dictionaries, and even files, are new-style classes now. A new-style class named object, the base class for all built-in types, has also been added so if no built-in type is suitable, you can just subclass object:

```python
class C(object):
    def __init__ (self):
        ...
    ...
```

This means that class statements that don’t have any base classes are always classic classes in Python 2.2. (Actually you can also change this by setting a module-level variable named __metaclass__ — see PEP 253 for the details — but it’s easier to just subclass object.)

The type objects for the built-in types are available as built-ins, named using a clever trick. Python has always had built-in functions named int(), float(), and str(). In 2.2, they aren’t functions any more, but type objects that behave as factories when called.

```python
>>> int
<type 'int'>
>>> int('123')
123
```

To make the set of types complete, new type objects such as dict and file have been added. Here’s a more interesting example, adding a lock() method to file objects:

```python
class LockableFile(file):
    def lock (self, operation, length=0, start=0, whence=0):
        import fcntl
        return fcntl.lockf(self.fileno(), operation,
                           length, start, whence)
```

The now-obsolete posixfile module contained a class that emulated all of a file object’s methods and also added a lock() method, but this class couldn’t be passed to internal functions that expected a built-in file, something which is possible with our new LockableFile.

## 2.2 Descriptors

In previous versions of Python, there was no consistent way to discover what attributes and methods were supported by an object. There were some informal conventions, such as defining __members__ and __methods__ attributes that were lists of names, but often the author of an extension type or a class wouldn’t bother to define them. You could fall back on inspecting the __dict__ of an object, but when class inheritance or an arbitrary __getattr__ hook were in use this could still be inaccurate.

The one big idea underlying the new class model is that an API for describing the attributes of an object using descriptors has been formalized. Descriptors specify the value of an attribute, stating whether it’s a method or a field. With the descriptor API, static methods and class methods become possible, as well as more exotic constructs.

Attribute descriptors are objects that live inside class objects, and have a few attributes of their own:

• __name__ is the attribute’s name.
• __doc__ is the attribute’s docstring.
• __get__(object) is a method that retrieves the attribute value from object.
• __set__(object, value) sets the attribute on object to value.
• __delete__(object, value) deletes the value attribute of object.
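As a rough illustration, here is a minimal hand-written descriptor. (This is a hypothetical example, and the signatures listed above are simplified: in actual Python the protocol methods also receive the descriptor instance itself, e.g. `__get__(self, obj, objtype)`.)

```python
class Answer(object):
    """A toy data descriptor: every read returns 42, writes are rejected."""
    def __get__(self, obj, objtype=None):
        return 42
    def __set__(self, obj, value):
        raise AttributeError("read-only attribute")

class C(object):
    x = Answer()          # descriptors live inside class objects

obj = C()
assert obj.x == 42        # the descriptor's __get__ intercepts the access
try:
    obj.x = 1             # __set__ raises, so the assignment fails
except AttributeError:
    pass
```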

For example, when you write `obj.x`, the steps that Python actually performs are:

```python
descriptor = obj.__class__.x
descriptor.__get__(obj)
```

For methods, descriptor.__get__ returns a temporary object that’s callable, and wraps up the instance and the method to be called on it. This is also why static methods and class methods are now possible; they have descriptors that wrap up just the method, or the method and the class. As a brief explanation of these new kinds of methods, static methods aren’t passed the instance, and therefore resemble regular functions. Class methods are passed the class of the object, but not the object itself. Static and class methods are defined like this:

```python
class C(object):
    def f(arg1, arg2):
        ...
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        ...
    g = classmethod(g)
```

The staticmethod() function takes the function f, and returns it wrapped up in a descriptor so it can be stored in the class object. You might expect there to be special syntax for creating such methods (`def static f()`, `defstatic f()`, or something like that) but no such syntax has been defined yet; that’s been left for future versions of Python.

More new features, such as slots and properties, are also implemented as new kinds of descriptors, and it’s not difficult to write a descriptor class that does something novel. For example, it would be possible to write a descriptor class that made it possible to write Eiffel-style preconditions and postconditions for a method. A class that used this feature might be defined like this:

```python
from eiffel import eiffelmethod

class C(object):
    def f(self, arg1, arg2):
        # The actual function
        ...
    def pre_f(self):
        # Check preconditions
        ...
    def post_f(self):
        # Check postconditions
        ...

    f = eiffelmethod(f, pre_f, post_f)
```

Note that a person using the new eiffelmethod() doesn’t have to understand anything about descriptors. This is why I think the new features don’t increase the basic complexity of the language. There will be a few wizards who need to know about it in order to write eiffelmethod() or the ZODB or whatever, but most users will just write code on top of the resulting libraries and ignore the implementation details.

## 2.3 Multiple Inheritance: The Diamond Rule

Multiple inheritance has also been made more useful through changing the rules under which names are resolved. Consider this set of classes (diagram taken from PEP 253 by Guido van Rossum):

```
      class A:
        ^ ^  def save(self): ...
       /   \
      /     \
     /       \
    /         \
class B     class C:
   ^           ^  def save(self): ...
    \         /
     \       /
      \     /
       \   /
     class D
```

The lookup rule for classic classes is simple but not very smart; the base classes are searched depth-first, going from left to right. A reference to D.save will search the classes D, B, and then A, where save() would be found and returned. C.save() would never be found at all. This is bad, because if C’s save() method is saving some internal state specific to C, not calling it will result in that state never getting saved.

New-style classes follow a different algorithm that’s a bit more complicated to explain, but does the right thing in this situation. (Note that Python 2.3 changes this algorithm to one that produces the same results in most cases, but produces more useful results for really complicated inheritance graphs.)

1. List all the base classes, following the classic lookup rule, and include a class multiple times if it’s visited repeatedly. In the above example, the list of visited classes is [D, B, A, C, A].
2. Scan the list for duplicated classes. If any are found, remove all but one occurrence, leaving the last one in the list. In the above example, the list becomes [D, B, C, A] after dropping duplicates.
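The two steps can be sketched directly (run under a modern Python, where every class implicitly inherits from object, so object also shows up at the end of the result; the helper names are made up for illustration):

```python
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

def classic_walk(cls):
    """Step 1: depth-first, left-to-right, keeping repeated visits."""
    visited = [cls]
    for base in cls.__bases__:
        visited.extend(classic_walk(base))
    return visited

def mro_2_2(cls):
    """Step 2: drop duplicates, keeping the LAST occurrence of each class."""
    visited = classic_walk(cls)
    return [c for i, c in enumerate(visited) if c not in visited[i + 1:]]

assert [c.__name__ for c in mro_2_2(D)] == ['D', 'B', 'C', 'A', 'object']
```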

Following this rule, referring to D.save() will return C.save(), which is the behaviour we’re after. This lookup rule is the same as the one followed by Common Lisp. A new built-in function, super(), provides a way to get at a class’s superclasses without having to reimplement Python’s algorithm. The most commonly used form will be super(class, obj), which returns a bound superclass object (not the actual class object). This form will be used in methods to call a method in the superclass; for example, D’s save() method would look like this:

```python
class D:
    def save (self):
        # Call superclass .save()
        super(D, self).save()
        # Save D's private information here
        ...
```

super() can also return unbound superclass objects when called as super(class) or super(class1, class2), but this probably won’t often be useful.

## 2.4 Attribute Access

A fair number of sophisticated Python classes define hooks for attribute access using __getattr__; most commonly this is done for convenience, to make code more readable by automatically mapping an attribute access such as `obj.parent` into a method call such as `obj.get_parent()`. Python 2.2 adds some new ways of controlling attribute access.

First, __getattr__(attr_name) is still supported by new-style classes, and nothing about it has changed. As before, it will be called when an attempt is made to access `obj.foo` and no attribute named “foo” is found in the instance’s dictionary.

New-style classes also support a new method, __getattribute__(attr_name). The difference between the two methods is that __getattribute__ is always called whenever any attribute is accessed, while the old __getattr__ is only called if “foo” isn’t found in the instance’s dictionary.

However, Python 2.2’s support for properties will often be a simpler way to trap attribute references. Writing a __getattr__ method is complicated because to avoid recursion you can’t use regular attribute accesses inside them, and instead have to mess around with the contents of __dict__. __getattr__ methods also end up being called by Python when it checks for other methods such as __repr__ or __coerce__, and so have to be written with this in mind. Finally, calling a function on every attribute access results in a sizable performance loss.

property is a new built-in type that packages up three functions that get, set, or delete an attribute, and a docstring. For example, if you want to define a size attribute that’s computed, but also settable, you could write:

```python
class C(object):
    def get_size (self):
        result = ... computation ...
        return result
    def set_size (self, size):
        ... compute something based on the size
        and set internal state appropriately ...

    # Define a property.  The 'delete this attribute'
    # method is defined as None, so the attribute
    # can't be deleted.
    size = property(get_size, set_size,
                    None,
                    "Storage size of this instance")
```

That is certainly clearer and easier to write than a pair of __getattr__/__setattr__ methods that check for the size attribute and handle it specially while retrieving all other attributes from the instance’s __dict__. Accesses to size are also the only ones which have to perform the work of calling a function, so references to other attributes run at their usual speed.

Finally, it’s possible to constrain the list of attributes that can be referenced on an object using the new __slots__ class attribute. Python objects are usually very dynamic; at any time it’s possible to define a new attribute on an instance by just doing `obj.new_attr=1`. A new-style class can define a class attribute named __slots__ to limit the legal attributes to a particular set of names. An example will make this clear:

```python
>>> class C(object):
...     __slots__ = ('template', 'name')
...
>>> obj = C()
>>> print obj.template
None
>>> obj.template = 'Test'
>>> print obj.template
Test
>>> obj.newattr = None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'C' object has no attribute 'newattr'
```

Note how you get an AttributeError on the attempt to assign to an attribute not listed in __slots__.

This section has just been a quick overview of the new features, giving enough of an explanation to start you programming, but many details have been simplified or ignored. Where should you go to get a more complete picture?

http://www.python.org/2.2/descrintro.html is a lengthy tutorial introduction to the descriptor features, written by Guido van Rossum. If my description has whetted your appetite, go read this tutorial next, because it goes into much more detail about the new features while still remaining quite easy to read.

Next, there are two relevant PEPs, PEP 252 and PEP 253. PEP 252 is titled “Making Types Look More Like Classes”, and covers the descriptor API. PEP 253 is titled “Subtyping Built-in Types”, and describes the changes to type objects that make it possible to subtype built-in objects. PEP 253 is the more complicated PEP of the two, and at a few points the necessary explanations of types and meta-types may cause your head to explode. Both PEPs were written and implemented by Guido van Rossum, with substantial assistance from the rest of the Zope Corp. team.

Finally, there’s the ultimate authority: the source code. Most of the machinery for the type handling is in Objects/typeobject.c, but you should only resort to it after all other avenues have been exhausted, including posting a question to python-list or python-dev.

## Excel formatting

Preserve the original `input.xls` formatting when you open it:

```python
from xlrd import open_workbook

input_wb = open_workbook('input.xls', formatting_info=True)
```

Create a new workbook based on this template:

```python
from xlutils.copy import copy as copy_workbook

output_wb = copy_workbook(input_wb)
```

Define some new cell styles:

```python
from xlwt import easyxf

red_background = easyxf('pattern: pattern solid, fore_color red;')
black_with_white_font = easyxf('pattern: pattern solid, fore_color black; font: color-index white, bold on;')
```

```python
input_ws = input_wb.sheet_by_name('StackOverflow')
output_ws = output_wb.get_sheet(0)

for rindex in range(0, input_ws.nrows):
    for cindex in range(0, input_ws.ncols):
        input_cell = input_ws.cell(rindex, cindex)
        if input_cell.value[ input_cell.value.rfind('.'): ] == '.pf':
            output_ws.write(rindex, cindex, input_cell.value, red_background)
        elif input_cell.value.find('deleted') >= 0:
            output_ws.write(rindex, cindex, input_cell.value, black_with_white_font)
        else:
            pass  # we don't need to modify it
```

```python
output_wb.save('output.xls')
```

Using the above example, unmodified cells should have their original formatting intact.

Should you need to alter the cell content AND would like to preserve the original formatting (i.e. NOT use your custom `easyxf` instance), you may use this snippet:

```python
def changeCell(worksheet, row, col, text):
    """Changes a worksheet cell text while preserving formatting"""
    previousCell = worksheet._Worksheet__rows.get(row)._Row__cells.get(col)
    worksheet.write(row, col, text)
    newCell = worksheet._Worksheet__rows.get(row)._Row__cells.get(col)
    newCell.xf_idx = previousCell.xf_idx

# ...

changeCell(worksheet_instance, 155, 2, "New Value")
```

For the comparisons, you can use the string methods `find` and `rfind` (which searches from the right). They return the index of the position of the substring within the string. They return `-1` if the substring is not found. Ergo, you see above `input_cell.value.find('deleted') >= 0` to evaluate whether or not the substring ‘deleted’ exists. For the `.pf` comparison, I used `rfind` as well as something in Python called slicing.
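A quick demonstration of those string methods (the file names here are made up):

```python
filename = 'report.backup.pf'

dot = filename.rfind('.')            # index of the LAST '.', searching from the right
assert dot == 13
assert filename[dot:] == '.pf'       # slicing from that index to the end

assert filename.find('deleted') == -1        # -1 means "substring not found"
assert 'row deleted'.find('deleted') >= 0    # >= 0 means it exists somewhere
```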

## Python `yield` and iteration

To understand what `yield` does, you must understand what generators are. And before generators come iterables.

## Iterables

When you create a list, you can read its items one by one. Reading its items one by one is called iteration:

```python
>>> mylist = [1, 2, 3]
>>> for i in mylist:
...    print(i)
1
2
3
```

`mylist` is an iterable. When you use a list comprehension, you create a list, and so an iterable:

```python
>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
...    print(i)
0
1
4
```

Everything you can use “`for... in...`" on is an iterable: `lists`, `strings`, files…

These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.

## Generators

Generators are iterators, but you can only iterate over them once. It’s because they do not store all the values in memory, they generate the values on the fly:

```python
>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
...    print(i)
0
1
4
```

It is just the same except you used `()` instead of `[]`. BUT, you cannot perform `for i in mygenerator` a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end calculating 4, one by one.
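You can see this one-shot behaviour directly:

```python
mygenerator = (x * x for x in range(3))
assert list(mygenerator) == [0, 1, 4]   # first pass consumes every value
assert list(mygenerator) == []          # second pass: the generator is exhausted
```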

## Yield

`yield` is a keyword that is used like `return`, except the function will return a generator.

```python
>>> def createGenerator():
...    mylist = range(3)
...    for i in mylist:
...        yield i*i
...
>>> mygenerator = createGenerator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object createGenerator at 0xb7555c34>
>>> for i in mygenerator:
...     print(i)
0
1
4
```

Here it’s a useless example, but it’s handy when you know your function will return a huge set of values that you will only need to read once.

To master `yield`, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object, this is a bit tricky 🙂

Then, your code will be run each time the `for` uses the generator.

Now the hard part:

The first time the `for` calls the generator object created from your function, it will run the code in your function from the beginning until it hits `yield`, then it’ll return the first value of the loop. Then, each other call will run the loop you have written in the function one more time, and return the next value, until there is no value to return.

The generator is considered empty once the function runs but does not hit `yield` anymore. It can be because the loop has come to an end, or because an `if/else` condition is no longer satisfied.
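A tiny (made-up) example of that second case: the generator ends as soon as the function returns without reaching another `yield`:

```python
def odd_numbers_until(limit):
    n = 0
    while True:
        n += 1
        if n > limit:    # the condition fails: the generator is now empty
            return       # returning is equivalent to raising StopIteration
        if n % 2:
            yield n

assert list(odd_numbers_until(6)) == [1, 3, 5]
```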

Generator:

```python
# Here you create the method of the node object that will return the generator
class Node(object):
    def _get_child_candidates(self, distance, min_dist, max_dist):

        # Here is the code that will be called each time you use the generator object:

        # If there is still a child of the node object on its left
        # AND if distance is ok, return the next child
        if self._leftchild and distance - max_dist < self._median:
            yield self._leftchild

        # If there is still a child of the node object on its right
        # AND if distance is ok, return the next child
        if self._rightchild and distance + max_dist >= self._median:
            yield self._rightchild

        # If the function arrives here, the generator will be considered empty:
        # there are no more than two values, the left and the right children
```

Caller:

```python
# Create an empty list and a list with the current object reference
result, candidates = list(), [self]

# Loop on candidates (they contain only one element at the beginning)
while candidates:

    # Get the last candidate and remove it from the list
    node = candidates.pop()

    # Get the distance between obj and the candidate
    distance = node._get_dist(obj)

    # If distance is ok, then you can fill the result
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)

    # Add the children of the candidate in the candidates list
    # so the loop will keep running until it will have looked
    # at all the children of the children of the children, etc. of the candidate
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))

return result
```

This code contains several smart parts:

• The loop iterates on a list but the list expands while the loop is being iterated 🙂 It’s a concise way to go through all these nested data even if it’s a bit dangerous since you can end up with an infinite loop. In this case, `candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))` exhausts all the values of the generator, but `while` keeps creating new generator objects which will produce different values from the previous ones since it’s not applied on the same node.
• The `extend()` method is a list object method that expects an iterable and adds its values to the list.

Usually we pass a list to it:

```python
>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
```

But in your code it gets a generator, which is good because:

1. You don’t need to read the values twice.
2. You can have a lot of children and you don’t want them all stored in memory.

And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples and generators! This is called duck typing and is one of the reason why Python is so cool. But this is another story, for another question…

You can stop here, or read a little bit more to see an advanced use of generators:

## Controlling a generator exhaustion

```python
>>> class Bank(): # let's create a bank, building ATMs
...    crisis = False
...    def create_atm(self):
...        while not self.crisis:
...            yield "$100"
>>> hsbc = Bank() # when everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # it's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # build a new one to get back in business
>>> for cash in brand_new_atm:
...    print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...
```

It can be useful for various things like controlling access to a resource.

The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? `map`/`zip` without creating another list?

Then just `import itertools`.

An example? Let’s see the possible orders of arrival for a 4 horse race:

```python
>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
 (1, 2, 4, 3),
 (1, 3, 2, 4),
 (1, 3, 4, 2),
 (1, 4, 2, 3),
 (1, 4, 3, 2),
 (2, 1, 3, 4),
 (2, 1, 4, 3),
 (2, 3, 1, 4),
 (2, 3, 4, 1),
 (2, 4, 1, 3),
 (2, 4, 3, 1),
 (3, 1, 2, 4),
 (3, 1, 4, 2),
 (3, 2, 1, 4),
 (3, 2, 4, 1),
 (3, 4, 1, 2),
 (3, 4, 2, 1),
 (4, 1, 2, 3),
 (4, 1, 3, 2),
 (4, 2, 1, 3),
 (4, 2, 3, 1),
 (4, 3, 1, 2),
 (4, 3, 2, 1)]
```

## Understanding the inner mechanisms of iteration

Iteration is a process implying iterables (implementing the `__iter__()` method) and iterators (implementing the `__next__()` method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
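A minimal hand-rolled pair makes the split concrete (the class names are hypothetical):

```python
class Countdown(object):
    """An iterable: you can get an iterator from it via __iter__()."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return CountdownIterator(self.n)

class CountdownIterator(object):
    """An iterator: __next__() produces values until StopIteration."""
    def __init__(self, n):
        self.current = n
    def __iter__(self):          # iterators are themselves iterable
        return self
    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

assert list(Countdown(3)) == [3, 2, 1]
```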