python excel writer and pandas data frame

http://xlsxwriter.readthedocs.org/working_with_pandas.html
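
For reference, a minimal sketch of the pattern described on that page, assuming a toy DataFrame and the made-up file name 'pandas_example.xlsx':

import pandas as pd

# Toy data purely for illustration.
df = pd.DataFrame({'product': ['A', 'B', 'C'],
                   'sales': [120, 340, 210]})

# Use XlsxWriter as the Excel engine and write the frame to a worksheet.
writer = pd.ExcelWriter('pandas_example.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1', index=False)

# The underlying workbook/worksheet objects are exposed for extra formatting.
workbook = writer.book
worksheet = writer.sheets['Sheet1']
worksheet.set_column('A:B', 18)  # widen the first two columns

writer.save()  # newer pandas versions use writer.close() instead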


python parallel programming

An introduction to parallel programming

using Python’s multiprocessing module

— written by Sebastian Raschka June 20, 2014


CPUs with multiple cores have become the standard in modern computer architectures, and we can find them not only in supercomputer facilities but also in our desktop machines at home and our laptops; even Apple’s iPhone 5S got a 1.3 GHz dual-core processor in 2013.

However, the default Python interpreter was designed with simplicity in mind and has a thread-safe mechanism, the so-called "GIL" (Global Interpreter Lock). In order to prevent conflicts between threads, it executes only one statement at a time (so-called serial processing, or single-threading).

In this introduction to Python’s multiprocessing module, we will see how we can spawn multiple subprocesses to avoid some of the GIL’s disadvantages.


Multi-Threading vs. Multi-Processing

Depending on the application, two common approaches in parallel programming are to run code via threads or via multiple processes. If we submit "jobs" to different threads, those jobs can be pictured as "sub-tasks" of a single process, and the threads will usually have access to the same memory areas (i.e., shared memory). This approach can easily lead to conflicts in case of improper synchronization, for example, if multiple threads write to the same memory location at the same time.

A safer approach (although it comes with the additional cost of communication between separate processes) is to submit multiple processes with completely separate memory locations (i.e., distributed memory): every process runs completely independently of the others.

Here, we will take a look at Python’s multiprocessing module and how we can use it to submit multiple processes that can run independently from each other in order to make best use of our CPU cores.

Introduction to the multiprocessing module

The multiprocessing module in Python’s Standard Library has a lot of powerful features. If you want to read about all the nitty-gritty tips, tricks, and details, I would recommend using the official documentation as an entry point.

In the following sections, I want to provide a brief overview of different approaches to show how the multiprocessing module can be used for parallel programming.

The Process class

The most basic approach is probably to use the Process class from the multiprocessing module.
Here, we will use a simple random string generator to generate four random strings in parallel processes.

import multiprocessing as mp
import random
import string

# Define an output queue
output = mp.Queue()

# Define an example function
def rand_string(length, output):
    """ Generates a random string of numbers, lower- and uppercase chars. """
    rand_str = ''.join(random.choice(
                    string.ascii_lowercase
                    + string.ascii_uppercase
                    + string.digits)
               for i in range(length))
    output.put(rand_str)

# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, output)) for x in range(4)]

# Run processes
for p in processes:
    p.start()

# Exit the completed processes
for p in processes:
    p.join()

# Get process results from the output queue
results = [output.get() for p in processes]

print(results)
['yzQfA', 'PQpqM', 'SHZYV', 'PSNkD']
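
A small portability note that is not part of the original article: on platforms that start new processes via "spawn" (Windows, and macOS on recent Python versions), the process-creation code should be placed under an if __name__ == '__main__': guard so that child processes can import the module without spawning processes of their own. A minimal sketch:

if __name__ == '__main__':
    # Only create and start processes when the script is run directly,
    # not when the module is re-imported by a spawned child process.
    output = mp.Queue()
    processes = [mp.Process(target=rand_string, args=(5, output))
                 for x in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print([output.get() for p in processes])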

How to retrieve results in a particular order

The order of the obtained results does not necessarily have to match the order of the processes (in the processes list). Since we eventually use the .get() method to retrieve the results from the Queue sequentially, the order in which the processes finish determines the order of our results.
E.g., if the second process finished just before the first process, the order of the strings in the results list could also have been ['PQpqM', 'yzQfA', 'SHZYV', 'PSNkD'] instead of ['yzQfA', 'PQpqM', 'SHZYV', 'PSNkD'].

If our application required us to retrieve results in a particular order, one possibility would be to refer to the processes’ ._identity attribute. In this case, we could also simply use the values from our range object as a position argument. The modified code would be:

# Define an example function
def rand_string(length, pos, output):
    """ Generates a random string of numbers, lower- and uppercase chars. """
    rand_str = ''.join(random.choice(
                    string.ascii_lowercase
                    + string.ascii_uppercase
                    + string.digits)
                for i in range(length))
    output.put((pos, rand_str))

# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, x, output)) for x in range(4)]

And the retrieved results would be tuples, for example, [(0, 'KAQo6'), (1, '5lUya'), (2, 'nj6Q0'), (3, 'QQvLr')]
or [(1, '5lUya'), (3, 'QQvLr'), (0, 'KAQo6'), (2, 'nj6Q0')]

To make sure that we retrieved the results in order, we could simply sort the results and optionally get rid of the position argument:

results.sort()
results = [r[1] for r in results]
['KAQo6', '5lUya', 'nj6Q0', 'QQvLr']

A simpler way to maintain an ordered list of results is to use the Pool.apply and Pool.map functions, which we will discuss in the next section.

The Pool class

Another and more convenient approach for simple parallel-processing tasks is provided by the Pool class.

There are four methods that are particularly interesting:

  • Pool.apply
  • Pool.map
  • Pool.apply_async
  • Pool.map_async

The Pool.apply and Pool.map methods are basically equivalents of Python’s built-in apply and map functions.

Before we come to the async variants of the Pool methods, let us take a look at a simple example using Pool.apply and Pool.map. Here, we will set the number of processes to 4, which means that the Pool class will only allow 4 processes to run at the same time.
This time, we will use a simple cube function that returns the cube of an input number.

def cube(x):
    return x**3

pool = mp.Pool(processes=4)
results = [pool.apply(cube, args=(x,)) for x in range(1,7)]
print(results)
[1, 8, 27, 64, 125, 216]

pool = mp.Pool(processes=4)
results = pool.map(cube, range(1,7))
print(results)
[1, 8, 27, 64, 125, 216]

The Pool.apply and Pool.map calls will block the main program until all processes have finished, which is quite useful if we want to obtain results in a particular order for certain applications.
In contrast, the async variants will submit all processes at once and retrieve the results as soon as they are finished. For example, the resulting sequence of cubes could also have been [8, 1, 64, 27, 125, 216] if the cube processes finished in a different order.

One more difference is that we need to use the get method after the apply_async() call in order to obtain the return values of the finished processes.

pool = mp.Pool(processes=4)
results = [pool.apply_async(cube, args=(x,)) for x in range(1,7)]
output = [p.get() for p in results]
print(output)
[1, 8, 27, 64, 125, 216]
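
For completeness, Pool.map_async works analogously: it returns immediately with a single AsyncResult object, and its get() method blocks until all results are available, returned in the order of the inputs. A short sketch (not from the original article):

pool = mp.Pool(processes=4)
async_result = pool.map_async(cube, range(1, 7))
print(async_result.get())  # [1, 8, 27, 64, 125, 216]
pool.close()
pool.join()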

Kernel density estimation as benchmarking function

In the following section, I want to do a simple comparison of a serial vs. multiprocessing approach where I will use a slightly more complex function than the cube example that we have been using above.

Here, I define a function for performing a Kernel density estimation for probability density functions using the Parzen-window technique.
I don’t want to go into much detail about the theory of this technique, since we are mostly interested in seeing how multiprocessing can be used for performance improvements, but you are welcome to read my more detailed article about the Parzen-window method here.

import numpy as np

def parzen_estimation(x_samples, point_x, h):
    """
    Implementation of a hypercube kernel for Parzen-window estimation.

    Keyword arguments:
        x_samples: training samples, 'd x 1'-dimensional numpy array
        point_x: point x for density estimation, 'd x 1'-dimensional numpy array
        h: window width

    Returns the predicted pdf as float.

    """
    k_n = 0
    for row in x_samples:
        x_i = (point_x - row[:,np.newaxis]) / (h)
        for row in x_i:
            if np.abs(row) > (1/2):
                break
        else: # "completion-else"*
            k_n += 1
    return (k_n / len(x_samples)) / (h**point_x.shape[1])

A quick note about the "completion else": In the past, I received some comments about whether I used this for-else combination intentionally or if it happened by mistake. This is a legitimate question, since this "completion else" is rarely used (that’s what I call it; I am not aware of a more "official" term for the else in this context; if there is one, please let me know).
I have a more detailed explanation in one of my blog posts, but in a nutshell: in contrast to a conditional else (in combination with if-statements), the "completion else" is only executed if the preceding code block (here the for-loop) has finished without hitting a break.


The Parzen-window method in a nutshell

So, what does this Parzen-window function do? The answer in a nutshell: it counts points in a defined region (the so-called window) and divides the number of points inside the window by the total number of points to estimate the probability that a single point falls within this region.

Below is a simple example where our Parzen-window is represented by a hypercube centered at the origin, and we want to get an estimate of the probability for a point being in the center of the plot based on this hypercube.

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from itertools import product, combinations
fig = plt.figure(figsize=(7,7))
ax = fig.gca(projection='3d')
ax.set_aspect("equal")

# Plot Points

# samples within the cube
X_inside = np.array([[0,0,0],[0.2,0.2,0.2],[0.1, -0.1, -0.3]])

X_outside = np.array([[-1.2,0.3,-0.3],[0.8,-0.82,-0.9],[1, 0.6, -0.7],
                  [0.8,0.7,0.2],[0.7,-0.8,-0.45],[-0.3, 0.6, 0.9],
                  [0.7,-0.6,-0.8]])

for row in X_inside:
    ax.scatter(row[0], row[1], row[2], color="r", s=50, marker='^')

for row in X_outside:
    ax.scatter(row[0], row[1], row[2], color="k", s=50)

# Plot Cube
h = [-0.5, 0.5]
for s, e in combinations(np.array(list(product(h,h,h))), 2):
    if np.sum(np.abs(s-e)) == h[1]-h[0]:
        ax.plot3D(*zip(s,e), color="g")

ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_zlim(-1.5, 1.5)

plt.show()

point_x = np.array([[0],[0],[0]])
X_all = np.vstack((X_inside,X_outside))

print('p(x) =', parzen_estimation(X_all, point_x, h=1))
p(x) = 0.3

Sample data and timeit benchmarks

In the section below, we will create a random dataset from a bivariate Gaussian distribution with a mean vector centered at the origin and an identity matrix as covariance matrix.

import numpy as np

np.random.seed(123)

# Generate random 2D-patterns
mu_vec = np.array([0,0])
cov_mat = np.array([[1,0],[0,1]])
x_2Dgauss = np.random.multivariate_normal(mu_vec, cov_mat, 10000)

The expected probability density at the center of the distribution is ~0.15915, as we can see below.
And our goal here is to use the Parzen-window approach to predict this density based on the sample data set that we have created above.

In order to make a "good" prediction via the Parzen-window technique, it is – among other things – crucial to select an appropriate window width. Here, we will use multiple processes to predict the density at the center of the bivariate Gaussian distribution using different window widths.

from scipy.stats import multivariate_normal
var = multivariate_normal(mean=[0,0], cov=[[1,0],[0,1]])
print('actual probability density:', var.pdf([0,0]))
actual probability density: 0.159154943092

Benchmarking functions

Below, we will set up benchmarking functions for our serial and multiprocessing approach that we can pass to our timeit benchmark function.
We will be using the Pool.apply_async function to take advantage of firing up processes simultaneously: here, we don’t care about the order in which the results for the different window widths are computed, we just need to associate each result with the input window width.
Thus, we add a little tweak to our Parzen-density-estimation function by returning a tuple of two values, window width and estimated density, which will allow us to sort the list of results later.

def parzen_estimation(x_samples, point_x, h):
    k_n = 0
    for row in x_samples:
        x_i = (point_x - row[:,np.newaxis]) / (h)
        for row in x_i:
            if np.abs(row) > (1/2):
                break
        else: # "completion-else"*
            k_n += 1
    return (h, (k_n / len(x_samples)) / (h**point_x.shape[1]))
def serial(samples, x, widths):
    return [parzen_estimation(samples, x, w) for w in widths]

def multiprocess(processes, samples, x, widths):
    pool = mp.Pool(processes=processes)
    results = [pool.apply_async(parzen_estimation, args=(samples, x, w)) for w in widths]
    results = [p.get() for p in results]
    results.sort() # to sort the results by input window width
    return results

Just to get an idea what the results would look like (i.e., the predicted densities for different window widths):

widths = np.arange(0.1, 1.3, 0.1)
point_x = np.array([[0],[0]])
results = []

results = multiprocess(4, x_2Dgauss, point_x, widths)

for r in results:
    print('h = %s, p(x) = %s' %(r[0], r[1]))
h = 0.1, p(x) = 0.016
h = 0.2, p(x) = 0.0305
h = 0.3, p(x) = 0.045
h = 0.4, p(x) = 0.06175
h = 0.5, p(x) = 0.078
h = 0.6, p(x) = 0.0911666666667
h = 0.7, p(x) = 0.106
h = 0.8, p(x) = 0.117375
h = 0.9, p(x) = 0.132666666667
h = 1.0, p(x) = 0.1445
h = 1.1, p(x) = 0.157090909091
h = 1.2, p(x) = 0.1685

Based on the results, we can say that the best window width would be h=1.1, since the estimated result is closest to the actual density ~0.15915.
Thus, for the benchmark, let us create 100 evenly spaced window widths in the range of 1.0 to 1.2.

widths = np.linspace(1.0, 1.2 , 100)
import timeit

mu_vec = np.array([0,0])
cov_mat = np.array([[1,0],[0,1]])
n = 10000

x_2Dgauss = np.random.multivariate_normal(mu_vec, cov_mat, n)

benchmarks = []

benchmarks.append(timeit.Timer('serial(x_2Dgauss, point_x, widths)',
        'from __main__ import serial, x_2Dgauss, point_x, widths').timeit(number=1))

benchmarks.append(timeit.Timer('multiprocess(2, x_2Dgauss, point_x, widths)',
        'from __main__ import multiprocess, x_2Dgauss, point_x, widths').timeit(number=1))

benchmarks.append(timeit.Timer('multiprocess(3, x_2Dgauss, point_x, widths)',
        'from __main__ import multiprocess, x_2Dgauss, point_x, widths').timeit(number=1))

benchmarks.append(timeit.Timer('multiprocess(4, x_2Dgauss, point_x, widths)',
        'from __main__ import multiprocess, x_2Dgauss, point_x, widths').timeit(number=1))

benchmarks.append(timeit.Timer('multiprocess(6, x_2Dgauss, point_x, widths)',
        'from __main__ import multiprocess, x_2Dgauss, point_x, widths').timeit(number=1))

Preparing the plotting of the results

import platform

def print_sysinfo():

    print('\nPython version  :', platform.python_version())
    print('compiler        :', platform.python_compiler())

    print('\nsystem     :', platform.system())
    print('release    :', platform.release())
    print('machine    :', platform.machine())
    print('processor  :', platform.processor())
    print('CPU count  :', mp.cpu_count())
    print('interpreter:', platform.architecture()[0])
    print('\n\n')
from matplotlib import pyplot as plt
import numpy as np

def plot_results():
    bar_labels = ['serial', '2', '3', '4', '6']

    fig = plt.figure(figsize=(10,8))

    # plot bars
    y_pos = np.arange(len(benchmarks))
    plt.yticks(y_pos, bar_labels, fontsize=16)
    bars = plt.barh(y_pos, benchmarks,
         align='center', alpha=0.4, color='g')

    # annotation and labels

    for ba,be in zip(bars, benchmarks):
        plt.text(ba.get_width() + 1.4, ba.get_y() + ba.get_height()/2,
            '{0:.2%}'.format(benchmarks[0]/be),
            ha='center', va='bottom', fontsize=11)

    plt.xlabel('time in seconds for n=%s' %n, fontsize=14)
    plt.ylabel('number of processes', fontsize=14)
    t = plt.title('Serial vs. Multiprocessing via Parzen-window estimation', fontsize=18)
    plt.ylim([-1,len(benchmarks)+0.5])
    plt.xlim([0,max(benchmarks)*1.1])
    plt.vlines(benchmarks[0], -1, len(benchmarks)+0.5, linestyles='dashed')
    plt.grid()

    plt.show()

Results

plot_results()
print_sysinfo()

Python version  : 3.4.1
compiler        : GCC 4.2.1 (Apple Inc. build 5577)

system     : Darwin
release    : 13.2.0
machine    : x86_64
processor  : i386
CPU count  : 4
interpreter: 64bit

Conclusion

We can see that we could speed up the density estimations for our Parzen-window function by submitting them in parallel. However, on my particular machine, the submission of 6 parallel processes doesn’t lead to a further performance improvement, which makes sense for a 4-core CPU.
We also notice that there was a significant performance increase when we were using 3 instead of only 2 processes in parallel. However, the performance increase was less significant when we moved up to 4 parallel processes.
This can be attributed to the fact that in this case the CPU consists of only 4 cores, and system processes, such as those of the operating system, are also running in the background. Thus, the fourth core simply does not have enough capacity left to further increase the performance of the fourth process to a large extent. And we also have to keep in mind that every additional process comes with additional overhead for inter-process communication.

Also, an improvement due to parallel processing only makes sense if our tasks are "CPU-bound", where the majority of the time is spent in the CPU, in contrast to I/O-bound tasks, i.e., tasks that spend most of their time waiting on data from disk.


kaggle airbnb

Data Files

File Name                  Available Formats
countries.csv              .zip (546 b)
age_gender_bkts.csv        .zip (2.46 kb)
test_users.csv             .zip (1.05 mb)
sessions.csv               .zip (59.14 mb)
sample_submission_NDF.csv  .zip (478.27 kb)
train_users_2.csv          .zip (4.07 mb)

In this challenge, you are given a list of users along with their demographics, web session records, and some summary statistics. You are asked to predict which country a new user’s first booking destination will be. All the users in this dataset are from the USA.

There are 12 possible outcomes of the destination country: ‘US’, ‘FR’, ‘CA’, ‘GB’, ‘ES’, ‘IT’, ‘PT’, ‘NL’, ‘DE’, ‘AU’, ‘NDF’ (no destination found), and ‘other’. Please note that ‘NDF’ is different from ‘other’ because ‘other’ means there was a booking, but it is to a country not included in the list, while ‘NDF’ means there wasn’t a booking.

The training and test sets are split by dates. In the test set, you will predict all the new users with first activities after 7/1/2014 (note: this is updated on 12/5/15 when the competition restarted). In the sessions dataset, the data only dates back to 1/1/2014, while the users dataset dates back to 2010.

File descriptions

  • train_users.csv – the training set of users
  • test_users.csv – the test set of users
    • id: user id
    • date_account_created: the date of account creation
    • timestamp_first_active: timestamp of the first activity, note that it can be earlier than date_account_created or date_first_booking because a user can search before signing up
    • date_first_booking: date of first booking
    • gender
    • age
    • signup_method
    • signup_flow: the page a user came to sign up from
    • language: international language preference
    • affiliate_channel: what kind of paid marketing
    • affiliate_provider: where the marketing is, e.g. google, craigslist, other
    • first_affiliate_tracked: what's the first marketing the user interacted with before signing up
    • signup_app
    • first_device_type
    • first_browser
    • country_destination: this is the target variable you are to predict
  • sessions.csv – web sessions log for users
    • user_id: to be joined with the column ‘id’ in users table
    • action
    • action_type
    • action_detail
    • device_type
    • secs_elapsed
  • countries.csv – summary statistics of destination countries in this dataset and their locations
  • age_gender_bkts.csv – summary statistics of users’ age group, gender, country of destination
  • sample_submission.csv – correct format for submitting your predictions
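
As a quick starting point, the files above can be loaded with pandas; a minimal sketch, assuming the .zip archives have been extracted into the working directory (column names follow the descriptions above):

import pandas as pd

train_users = pd.read_csv('train_users_2.csv',
                          parse_dates=['date_account_created',
                                       'date_first_booking'])
sessions = pd.read_csv('sessions.csv')

# Join the web-session log onto the users table via the user id.
merged = train_users.merge(sessions, left_on='id',
                           right_on='user_id', how='left')

# Distribution of the target variable, including 'NDF' and 'other'.
print(train_users['country_destination'].value_counts())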

Rossmann Store Sales, Winner’s Interview: 1st place, Gert Jacobusse

Rossmann operates over 3,000 drug stores in 7 European countries. In their first Kaggle competition, Rossmann Store Sales, this drug store giant challenged Kagglers to forecast 6 weeks of daily sales for 1,115 stores located across Germany. The competition attracted 3,738 data scientists, making it our second most popular competition by participants ever.

Gert Jacobusse, a professional sales forecast consultant, finished in first place using an ensemble of over 20 XGBoost models. Notably, most of the models individually achieve a very competitive (top 3 leaderboard) score. In this blog, Gert shares some of the tricks he’s learned for sales forecasting, as well as wisdom on the why and how of using hold out sets when competing.

The Basics

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

My hobby and daily job is to work on data analysis problems, and I participate in a lot of Kaggle competitions. With my own company Rogatio I deliver tailored sales forecasts for several companies – product specific as well as overall. Therefore I knew how to approach the problem.

Gert's profile on Kaggle

How did you get started competing on Kaggle?

I don’t remember, somehow it has become a part of my life. I enjoy the competitions so much that it is really addictive for me. But in a good way: it is nice exposure for my skills, I learn a lot of new techniques and applications, I get to know other skilled data scientists and if I am lucky I even get paid!

What made you decide to enter this competition?

A sales forecast is a tool that can help almost any company I can think of. Many companies rely on human forecasts that are not of a constant quality. Other companies use a standard tool that is not flexible enough to suit their needs. As an individual researcher I can create a solution that really improves business. And that is exactly what this competition is about. I am very eager to further develop and show my skills – therefore I did not hesitate a moment to enter this competition.

Let’s Get Technical

What preprocessing and supervised learning methods did you use?

The most important preprocessing was the calculation of averages over different time windows. For each day in the sales history, I calculated averages over the last quarter, last half year, last year and last 2 years. Those averages were split out by important features like day of week and promotions. Second, some time indicators were important: not only month and day of year, but also relative indicators like number of days since the summer holidays started. Like most teams, I used extreme gradient boosting (xgboost) as a learning method.
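
The interview does not include code, but the kind of time-window averages described above can be sketched roughly as follows with pandas. This is a simplified illustration (one set of trailing averages relative to the end of the training period, split by store, day of week, and promotion), not the winner's actual feature pipeline; column names such as Store, DayOfWeek, Promo, Date, and Sales follow the Rossmann competition data.

import pandas as pd

train = pd.read_csv('train.csv', parse_dates=['Date'])

def window_mean(df, days):
    # Mean sales per store, day of week, and promo flag over the
    # trailing `days` days of the training period.
    cutoff = df['Date'].max() - pd.Timedelta(days=days)
    recent = df[df['Date'] > cutoff]
    return (recent.groupby(['Store', 'DayOfWeek', 'Promo'])['Sales']
                  .mean()
                  .rename('mean_sales_%dd' % days)
                  .reset_index())

features = train.copy()
for days in (90, 180, 365, 730):   # last quarter, half year, year, 2 years
    features = features.merge(window_mean(train, days),
                              on=['Store', 'DayOfWeek', 'Promo'],
                              how='left')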

Figure 1 a/b. Illustration of the task: predict sales six weeks ahead, based on historical sales (only last 3 months of train set shown).

What was your most important insight into the data?

The most important insight was that I could reliably predict performance improvements based on a hold out set within the trainset. Because of this insight, I did not overfit the public test set, so my model worked very well on the public test set as well as the unseen private test set that was four weeks further ahead.

Do you always use hold out sets to validate your model in every competition?

Yes, sometimes using cross-validation (with multiple holdout sets) and sometimes with a single holdout set, like I did in this competition. The advantage of a holdout set is that I can use the public test set as a real test set, not as a set that gives me feedback to improve my model. As a consequence, I get reliable feedback about how much I overfitted my own holdout set. Therefore, I do not like competitions where the train/test split is non-random while the public/private split is random: in such competitions, you can build a better model by using feedback from the public leaderboard. I do not like that because I am not aware of any real-life problem that would require such an approach. This competition was ideal for me: the train/test split was time-based, and so was the public/private split!

Do you have any recommendations for selecting data for a hold out set and using it most effectively?

For selecting a hold out set, I always try to imitate the way that the train and test set were split. So, if it is a time split, I split my holdout sample time based; if it is a geographical split by city, I split my holdout set by city; and if it is a random split, then my holdout split will be random as well. You can effectively use a holdout set to push the limit towards how much you can learn from the data without overfitting. Don’t be afraid to overfit your holdout set, the public leaderboard will tell you if you do so.
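
For the time-based case he mentions, a holdout split might look like this sketch (again illustrative, assuming a training frame with a Date column like the Rossmann data):

import pandas as pd

train = pd.read_csv('train.csv', parse_dates=['Date'])

# Hold out the most recent six weeks, mirroring the six-weeks-ahead
# forecasting task and the time-based public/private split.
holdout_start = train['Date'].max() - pd.Timedelta(weeks=6)
train_part = train[train['Date'] < holdout_start]
holdout = train[train['Date'] >= holdout_start]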

Were you surprised by any of your findings?

Yes, I was surprised that a model without the most recent month of data (that I used to predict sales further ahead) did almost as well as a model that did include recent data. This finding is very specific for the Rossmann data, and it means that short term changes are less important than they often are in forecasting.


Which tools did you use?

For preprocessing I loaded the data into an SQL database. For creating features and applying models, I used Python.

How did you spend your time on this competition?

I spent 50% on feature engineering, 40% on feature selection plus model ensembling, and less than 10% on model selection and tuning.

What was the run time for both training and prediction of your winning solution?

The winning solution consists of over 20 xgboost models that each need about two hours to train when running three models in parallel on my laptop. So I think it could be done within 24 hours. Most of the models individually achieve a very competitive (top 3 leaderboard) score.

Figure 3. A time indicator for the time until store refurbishment (last four days on the right of the plot) reveals how the sales are expected to change during the weeks before a refurbishment.

Words of Wisdom

What have you taken away from this competition?

More experience in sales forecasts and a very solid proof of my skills. Plus a nice extra turnover of $15,000 that I had not forecasted.

Do you have any advice for those just getting started in data science?

  1. make sure that you understand the principles of cross validation, overfitting and leakage
  2. spend your time on feature engineering instead of model tuning
  3. visualize your data every now and then

Just for Fun

If you could run a Kaggle competition, what problem would you want to pose to other Kagglers?

You have proven to be very good at creating competitions, I don’t have an idea to improve on that right now 😉 But I have the opportunity so let me share one idea for improvement: to create good models and anticipate the kind of error that can be expected, I often miss explicit information on how the train/test and public/private sets are being split. A competition is (even) more fun for me when I don’t have to guess at what types of mechanisms impact model performance.

What is your dream job?

Work for a variety of customers – and help them with data challenges that are central to the success of their business. And have enough spare time to participate in Kaggle competitions!

linux command

Copy or move file

sudo cp -r <source> <destination>

Ex: cp -r home/ayu/xyz/. /tmp

sudo mv <source> <destination>

Ex: mv drupal_commons/ xyz/

Rename file

mv <old file name> <new file name>

 

File Permission

chmod

The chmod command is used to change the permissions of a file or directory. To use it, you specify the desired permission settings and the file or files that you wish to modify. There are two ways to specify the permissions, but I am only going to teach one way.

It is easy to think of the permission settings as a series of bits (which is how the computer thinks about them). Here’s how it works:

rwx rwx rwx = 111 111 111
rw- rw- rw- = 110 110 110
rwx --- --- = 111 000 000

and so on...

rwx = 111 in binary = 7
rw- = 110 in binary = 6
r-x = 101 in binary = 5
r-- = 100 in binary = 4

Ex: sudo chmod 600 some_file

Here is a table of numbers that covers all the common settings. The ones beginning with "7" are used with programs (since they enable execution) and the rest are for other kinds of files.

Value Meaning
777 (rwxrwxrwx) No restrictions on permissions. Anybody may do anything. Generally not a desirable setting.
755 (rwxr-xr-x) The file’s owner may read, write, and execute the file. All others may read and execute the file. This setting is common for programs that are used by all users.
700 (rwx------) The file’s owner may read, write, and execute the file. Nobody else has any rights. This setting is useful for programs that only the owner may use and must be kept private from others.
666 (rw-rw-rw-) All users may read and write the file.
644 (rw-r--r--) The owner may read and write a file, while all others may only read the file. A common setting for data files that everybody may read, but only the owner may change.
600 (rw-------) The owner may read and write a file. All others have no rights. A common setting for data files that the owner wants to keep private.