
Sunday, August 30, 2009

Means and Proportions with two populations

Statistical inference about means and proportions with two populations is one of the most common applications in analytics: comparing campaign response rates between two groups of customers, pre- and post-campaign sales, membership renewal rates, and so on.

Call it chance or whatever, but whenever these kinds of tasks come up, I hear people talking about t-tests only. That is fine as long as you want to compare means, i.e., when your target variable is continuous. But how or why do people reach for the t-test when they want to compare ratios or proportions? Whatever happened to the chi-square test, or the z-test for the difference in proportions?

I did a bit of research on the net, a bit of calculation using pen and paper [very good exercise for the brain in this age of calculators and spreadsheets :-) ], read a very good article by Gerard E. Dallal, and I found the answers.

Going back to our introductory class in statistics, let’s check out the formulae for the t-tests.

1. Assuming that the population variances are equal,
T = (X1 – X2)/sqrt(Sp^2 (1/n1 + 1/n2)) ..........Equation 1

where
X1, X2 = means of samples 1 and 2
n1, n2 = sizes of samples 1 and 2
S1^2, S2^2 = variances of samples 1 and 2
Sp^2 = pooled variance = [(n1-1)S1^2 + (n2-1)S2^2] / (n1+n2-2)

2. Assuming that the population variances are not equal,
T = (X1 – X2)/sqrt(S1^2/n1 + S2^2/n2) ..........Equation 2
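To make the two formulas concrete, here is a minimal Python sketch (assuming numpy and scipy are installed; the summary statistics are made up) that computes both versions of T and cross-checks them against scipy's ttest_ind_from_stats:

import numpy as np
from scipy import stats

# Hypothetical summary statistics for two samples (numbers are made up).
x1_mean, s1, n1 = 52.3, 11.2, 60
x2_mean, s2, n2 = 48.9, 13.7, 75

# Equation 1: pooled-variance t statistic.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_pooled = (x1_mean - x2_mean) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# Equation 2: unpooled (Welch) t statistic.
t_welch = (x1_mean - x2_mean) / np.sqrt(s1**2 / n1 + s2**2 / n2)

# Cross-check against scipy's implementations of the same two tests.
t1, _ = stats.ttest_ind_from_stats(x1_mean, s1, n1, x2_mean, s2, n2, equal_var=True)
t2, _ = stats.ttest_ind_from_stats(x1_mean, s1, n1, x2_mean, s2, n2, equal_var=False)

print(t_pooled, t1)  # the two pooled values should match
print(t_welch, t2)   # the two Welch values should match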


We have also been taught that the test statistic Z is used to determine the difference between two population proportions based on the difference between the two sample proportions.

And the formula for the Z statistic is given by
Z = (P1 – P2)/ sqrt(P(1-P)(1/n1 + 1/n2)) ..........Equation 3

where
P1, P2 = proportions of successes (or the target category) in samples 1 and 2
n1, n2 = sizes of samples 1 and 2
X1, X2 = numbers of successes (or the target category) in samples 1 and 2
P = pooled estimate of the proportion of successes = (X1 + X2)/(n1 + n2)
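As a quick illustration, here is a small Python sketch (numpy and scipy assumed; the campaign counts are made up) that plugs response counts from two groups into Equation 3:

import numpy as np
from scipy import stats

# Hypothetical campaign results (made-up counts).
x1, n1 = 220, 4000   # responders and sample size, group 1
x2, n2 = 165, 3800   # responders and sample size, group 2

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

# Equation 3: two-proportion z statistic with the pooled proportion.
z = (p1 - p2) / np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value

print(z, p_value)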

The test statistic Z (Equation 3) is equivalent to the chi-square test on the corresponding 2x2 table, also called a test of homogeneity of proportions.

But how different are proportions from means? The proportion with the desired outcome is the number of observations with the outcome divided by the total number of observations. Suppose we create a variable that equals 1 if the subject has the outcome and 0 if not. The proportion of observations with the outcome is then the mean of this variable, because the sum of these 0s and 1s is the number of observations with the outcome.

Let's suppose there are m 1s and (n-m) 0s among the n observations. Then, XMean (=P) = m/n and Xi - XMean is equal to (1-m/n) for m observations and 0-m/n for (n-m) observations. When these results are combined, the final result is

∑(Xi – XMean)^2 = m(1 – m/n)^2 + (n – m)(0 – m/n)^2
= m(1 – 2m/n + m^2/n^2) + (n – m)(m^2/n^2)
= m – 2(m^2/n) + (m^3/n^2) + (m^2/n) – (m^3/n^2)
= m – (m^2/n)
= m(1 – m/n)
= nP(1 – P)

So, variance = ∑(Xi – XMean)^2/n = P(1-P)
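This identity is easy to verify numerically. A tiny Python sketch (numpy assumed; m and n are made up) checks that the variance of a 0/1 variable equals P(1-P):

import numpy as np

n, m = 500, 120                       # made-up counts: n observations, m ones
x = np.r_[np.ones(m), np.zeros(n - m)]

P = m / n
print(np.var(x))    # population variance, i.e. sum((x - mean)^2) / n
print(P * (1 - P))  # identical to the line above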

Substituting this into Equation 3 (the Z statistic), we get
Z = (P1 – P2)/sqrt(Variance/n1 + Variance/n2), which is not so different from Equation 2 (the "equal variances not assumed" version of the t-test).

As long as the sample sizes are relatively large, the distributional assumptions are met, and the response is binary, the t-test and the z-test will give p-values that are very close to one another.

And when there are only two categories, the z-test and the chi-square test turn out to be exactly equivalent, though the chi-square test is by nature two-sided. The chi-square distribution with 1 df is just the square of the standard normal (z) distribution.
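Here is a quick numerical check of these equivalences, as a minimal Python sketch (numpy and scipy assumed; the counts are the same made-up ones as above): the pooled z-test, the Welch t-test on the 0/1 dummies, and the chi-square test on the 2x2 table give nearly identical p-values, and the chi-square statistic equals z squared.

import numpy as np
from scipy import stats

# Same made-up campaign counts as before.
x1, n1 = 220, 4000
x2, n2 = 165, 3800

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

# Two-proportion z-test (Equation 3).
z = (p1 - p2) / np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
p_z = 2 * stats.norm.sf(abs(z))

# Welch t-test on the underlying 0/1 variables (Equation 2 applied to dummies).
g1 = np.r_[np.ones(x1), np.zeros(n1 - x1)]
g2 = np.r_[np.ones(x2), np.zeros(n2 - x2)]
t, p_t = stats.ttest_ind(g1, g2, equal_var=False)

# Chi-square test on the 2x2 table (no continuity correction, so chi2 == z**2).
table = [[x1, n1 - x1], [x2, n2 - x2]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)

print(p_z, p_t, p_chi2)   # very close to one another
print(chi2, z**2)         # identical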

The various tests and their assumptions as listed in Wikipedia are given below:
1. Two-sample pooled t-test, equal variances
(Normal populations or n1 + n2 > 40) and independent observations and σ1 = σ2 and (σ1 and σ2 unknown)

2. Two-sample unpooled t-test, unequal variances
(Normal populations or n1 + n2 > 40) and independent observations and σ1 ≠ σ2 and (σ1 and σ2 unknown)

3. Two-proportion z-test, equal variances
n1 p1 > 5 and n1(1 − p1) > 5 and n2 p2 > 5 and n2(1 − p2) > 5 and independent observations

4. Two-proportion z-test, unequal variances
n1 p1 > 5 and n1(1 − p1) > 5 and n2 p2 > 5 and n2(1 − p2) > 5 and independent observations

Sunday, May 31, 2009

Analytics: Reality and the Growing Interest

This is a guest post by Bhupendra Khanal, CEO of InRev Systems.

InRev Systems is a Bangalore based Decision Management Company, which works on Data Based Information Systems. Their interest areas are Marketing Services, Web Information, MIS Reporting, Social Media Services and Economic Research. Bhupendra also maintains a personal blog at Business Analytics.

Introduction
Huge amounts of data are collected by business houses today. There are also data collection agencies that hold information such as economic variables, demographic variables, police fraud lists, loan default lists, telephone and electricity bill payment histories, etc. All this data, if analyzed, tends to separate people into various similar groups. These groups can be fraudulent groups, defaulter groups, risk-averse and risk-taking groups, high-income and low-income groups, etc.

Based on this information, many business decisions can be made in a better, more rational way. Analytics leverages almost the same concept, but with two assumptions:
· The behavior of people does not change with time
· People with similar profiles behave similarly


Predictive Modeling and Segmentation are the major components of Analytics. The profiles and the behavior of a set of people are taken, and a relationship between the two is estimated. That same relationship is then applied to new profiles to predict the behavior of people with similar profiles. This is commonly done using statistical techniques like Regression Modeling (Linear, Logistic, Poisson, etc.) and Neural Networks.
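As a rough illustration of that idea, here is a minimal Python sketch using scikit-learn; the profile attributes, coefficients and data below are entirely made up. A relationship between customer profiles and a behavior is learned from historical records and then applied to new profiles.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical customer profiles: age, income, number of past defaults.
n = 5000
X = np.column_stack([
    rng.integers(21, 65, n),   # age
    rng.gamma(2.0, 3.0, n),    # income
    rng.poisson(0.3, n),       # past defaults
])

# Made-up "behavior" to predict: default within a year (more likely with
# more past defaults and lower income).
logit = -2.0 + 1.2 * X[:, 2] - 0.15 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the relationship on known profiles and behavior...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# ...and apply it to new profiles with the same attributes.
print("predicted default probabilities:", model.predict_proba(X_test[:5])[:, 1])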

The Business Analytics Services Market comprises solutions for storing, analyzing, modeling, and delivering information in support of decision-making and reporting processes.

Analytics, regardless of its complexity, serves the same purpose: to assist in improving or standardizing decisions at all levels of an organization.


Size and Type of Market
The global Analytics market is estimated at around $25 billion today. It has been growing very fast, almost doubling every five years for the last few decades; it stood at about $19 billion in 2006 and is expected to reach $31 billion by the end of 2011 (source: IDC, 2007).

There are many areas for the implementation of Analytics. The most common Analytics practices are Risk Management, Marketing Analytics, Web Analytics, and Fraud Prediction. These functions are handled by different organizations in different ways, with most companies maintaining a fine balance between the in-house team, outsourcing partners, and consulting vendors. Such variation makes calculating the market size very difficult.

Risk Management is one of the largest components of the Analytics industry today, and the pioneering one too. The market is huge in the US and Europe; Asia Pacific is coming up fast, and it is yet to get into full swing in China and South Asia, including Nepal.

Web Analytics (analytics of web data) has picked up very fast, growing by more than 20% a year for the last few years, thanks to the revolution led by Amazon, Google and Yahoo. The Web Analytics market is a late entrant but has already passed the one-billion-dollar mark.

The biggest of all is the Marketing Analytics (MA) and Strategy Science component. This is really huge, owing to the efforts put in by companies like Dunhumby, Axiom and others. MA is critical as it competes directly with marketing research firms and strategy consulting firms in the type of work it does. This makes its market size difficult to calculate, but it is worth billions of dollars.

Big Players in the Area
The Analytics market is huge today, but the industry is fragmented. None of the core Analytics and Decision Management companies has ever touched the billion-dollar revenue mark.


Fair Isaac is the pioneer and the largest Decision Management company, with revenue of around 800 million dollars. The other core DM and Analytics companies are much smaller. This has happened due to aggressive moves by IT services companies and information bureaus to acquire Analytics companies.

Experian, Transunion and Equifax are the three major bureaus in the US, and there are others too: Innovis, Axiom, Teletrack, Lexis Nexis, etc. Each of these bureaus offers analytics services. Apart from these, BI majors like SAS, SPSS, Salford Systems, etc. offer Analytics services.

The Indian Analytics market is small but growing. This is evident from salary hike rates in Analytics that are more than double those in the software domain. Both outsourcing shops and India-focused companies have mushroomed in India over the last five years, while finding proper talent and retaining it remains a problem.

Major banks like ICICI and HDFC have strong in-house analytics units, while SBI has partnered with GE Money for analytics support for its cards portfolio. The smaller banks are yet to start.

Other majors in non-banking industries are also using Analytics through outsourcing or consulting. Airtel and Reliance are leading the way in the use of Analytics in Indian telecom.

Scope on how it can grow: Indian Context
The Analytics industry is growing fast. It has the scope to form a separate process across functions like HR, Operations, and IT systems. It is now in the early stage of process metamorphosis, where each process starts with consulting, grows through in-house establishments, and finally settles down as outsourcing to third parties.

The future growth depends on the approach of major players and consolidation of the industry. All this will make it a high value and high growth industry, where players can provide high quality products and services while maintaining their profitability.

Another big challenge is the supply of quality manpower and training. Today, India neither has good institutes training people in Analytics, nor does it have an Infosys for Analytics (one that can employ and train huge numbers of freshers). Even the number of good Statistics and Mathematics institutes in India is small.

Amidst all these challenges, India is positioned fairly well in the world today, and it will be interesting to see if it can become a Knowledge Process and Analytics hub in the days ahead.

Monday, May 18, 2009

A Tale Of Two Banks and One Telecom Service Provider

I have bank accounts at ICICI & Citibank (India). I also use credit cards of these two companies. Let’s first talk about their Mobile Banking services. ICICI has the following options:

My account getting credited above
My account getting debited above
Salary credited to my account #
Cheque deposited in my account bounced
Account Balance above
Account Balance below
Debit Card Purchases above

Messages for the above can be received through SMS, Email or both.

Citibank calls it Alerts, and they offer the following options:

Withdrawal balance by account
Time deposit maturity advice
Cheque Status
Cheque Bounce Alert
Time Deposit Redemption Notice
Cheque dishonor

These messages can be received through SMS, Email or Both depending on the alert type. Also, for some of the alerts, message frequencies can be chosen as Daily, Weekly, or Monthly.

At first glance, it seems that Citibank offers more options, but a closer look reveals that ICICI has done more research and come up with a better offering. The Citibank alerts are based on how frequently you want them, while ICICI's alerts are based on particular credited and debited amounts defined by the customer. ICICI's options make more sense, as I don't want alerts daily or weekly, but only when I have made a transaction.

And because of this lack of options, I continue to receive daily alerts on my Citibank account balance even when the balance hasn't changed or I haven't made a transaction for a month. Also, the system at Citibank looks like a typical CRM system, while the one at ICICI looks more like a BI system.

Now, let’s discuss their Credit Cards and their Customer Analytics.

I have been using an ICICI Gold credit card for almost 3 years now. I have used it to pay all my bills: electricity, mobile, internet, shopping… and I have paid all dues on time. I applied for and got the Citibank Gold card about a year after I got my ICICI card. I used the Citibank Gold card for 2-3 purchases, paid all the dues on time, and Citibank increased my credit limit every time.

Encouraged by their response (I got very nice emails from their customer service) and actions (increase of credit limit), I started using the Citibank Gold card more frequently. In a few months’ time, I got a free-for-life Citibank Platinum card with all these attractive features and benefits. I even got an invitation to join a wine club though I’m more of a rum and whiskey guy. Don’t blame them though; getting your hands on such kind of consumer lifestyle/preferences data will be next to impossible in India.

I have now almost forgotten the ICICI Gold card; I use it very rarely these days. The credit limit given to me 3 years back still remains the same. And I have never received a single email or communication from ICICI. I also know that ICICI Bank outsources its Customer Analytics to an Analytics Service Provider in Mumbai. So where is the up-sell analytics? Doesn't their data show that I have "almost" left? It again suggests that customer spending information is not being analyzed at all, and that they are not doing much about customer churn either.

On a different note, I am a post-paid mobile customer of India’s largest telecom service provider, Airtel. Every month, whenever my bill is generated, I receive about 6 SMSs from Airtel within the next 5-6 days. The messages can be summarized as:

1. Your bill has been generated…
2. You can view your bill at your online account…
3. Your bill has been emailed to abc@gmail.com and the password to open it is… and if you haven’t received it… (this is actually 2 SMSs because of the length of the message)
4. Your bill amount is XXX…
5. Your bill amount is XXX and the last date of payment is…

For the last 3 years or so, I have been paying all my bills before the due date. Once I got so irritated that I emailed their customer service, and their reply? "These are server-generated messages and we can't do anything about it." I got more irritated and asked for my email to be forwarded to Airtel's CRM, Business Intelligence, Analytics, or whatever the team there likes to call itself!

I got just one SMS alert when my next month’s bill was generated. But it was back to square one from the 2nd month onwards.

My question is which “smart” manager came up with the idea that 6 SMSs should be sent to all their post-paid customers every month? Why doesn’t one SMS saying “Your bill amount of XXX has been generated and the due date is ABC” suffice? Has anyone at Airtel calculated the cost of sending 5-6 SMSs to all their post-paid customers, every month? Shouldn’t the last SMS be sent only to those customers who have a habit of making late payments? Do they send the second SMS to those customers who don’t have online accounts too? And why can’t these alerts be customized based on a customer’s usage and payment behavior?

I can give more examples of Indian companies in the retail, entertainment, and services sectors that are doing nothing, or very little, with all the customer data they have, in spite of mentioning or advertising that they use BI & Analytics. So how mature is the Business Intelligence and CRM Analytics setup at Indian companies? And how skilled or knowledgeable are the senior people associated with it?

Tuesday, April 28, 2009

Workforce Analytics

When I was actively working in the Marketing Research domain, I designed and programmed a lot of surveys on employee satisfaction/morale/happiness for US companies. That was around 2004; I guess a lot has changed since then.

I came across this article on Workforce Analytics by Becca Goren on the SAS website. It sounds very promising and it seems to be THE RIGHT THING TO DO. I have summarized the article and edited it a bit for my blog.

------

Most organizations today do not track who is critical, who will likely leave, or why they will leave, so there’s no opportunity to develop effective strategies to retain critical employees.

Workforce analytics is the missing link in today’s business strategy. It is imperative for organizations to know how to attract, grow and retain these employees, as well as sustain the already seasoned professionals that bring depth and value to the organization.

Everyone across an organization can play a role:
• Business managers need to identify pending skill gaps and a pipeline for tomorrow’s leaders.
• Finance managers need to determine costs related to vacancies, overtime, outsourcing, recruitment and loss of critical skills, and then model strategies to address these issues.
• HR needs to spot trends and develop strategies to support changing workforce demands while partnering with business and finance managers to determine the best organizational structure/restructuring to address change.

Five ways to optimize the organization through its work force

1. Align work force with business goals:
• Forecast the amount and types of talent required to execute business strategy.
• Gain full information needed to make decisions for tomorrow.
• Manage the work force to drive the organization to meet its goals.
• Identify specific talent gaps.

2. Address workforce demands at every stage of the talent life cycle:
• Acquisition: Match the right employee with the right skills at the right time at the right cost.
• Growth: Develop skills for today’s star performers and tomorrow’s leaders.
• Retention: Proactively respond to changing workforce demographics and trends.

3. Identify and mitigate risks:
• Analyze the past and look forward to spot trends in key factors related to voluntary termination, absences and other sources of risk.
• Determine the impacts of organizational change on employee performance.
• Predict where vacancies and leadership needs are likely to occur.
• Understand workforce supply-and-demand patterns, and create strategies with additional labor sources to meet that demand.

4. Plan for business change, such as mergers, acquisitions and downsizing:
• Model what-if scenarios of potential effects across divisions and geographies.
• Make strategic decisions to reduce the risk of losing good employees and keeping redundant or underperforming ones.

5. Synchronize financial and operational workforce strategies:
• Expand the background for each employee to look beyond salaries and general workforce costs for a more granular understanding: absences, overtime, training costs, headcount, salaries and other compensation.
• Develop a defensible position on how costs drive value for the organization.

But my biggest question is how many organizations actually put these into practice?

Thursday, March 19, 2009

Software Dependence & Model Accuracy

I work a lot with the Data Mining/Analytics business development team at my current company. My primary role is to be there during client presentations/conferences and to answer clients' queries on modeling techniques and the USP of our approach in terms of model performance and/or business benefits.

During one of these interactions, we found out that a particular client was using THREE data mining software packages. Not statistical packages or the base versions, but the complete, very expensive data mining suites: SAS EM, SPSS Clementine and KXEN.

I was like, “Wow!!! But do you really need 3 data mining packages???” Our initial questions and the client’s answers confirmed that inconsistent data formats were not the reason, as the client already has a BI/DW system. Their reason? Well, they are of the opinion that some algorithms/techniques in a particular DM package are much better and more accurate than the same algorithms/techniques in another.

I was, and I am, not convinced. Unless a particular DM package has a totally different, new algorithm for which you obviously can't make a comparison, I haven't come across or heard of any stark differences in model performance and results for the same algorithms offered by the reputed DM packages. Data mining solutions and the subsequent business benefits are not driven solely by model accuracy; a lot depends on how you interpret and apply the model's results too.

What’s your opinion on this?


On a slightly different but related note, I learned of an interesting case from Rob Mattison’s webcast on Telco Churn Management available on the SAS website. He mentioned an incident where a client’s existing churn model was giving an impressive “above 90%” accuracy. Feeling something amiss, he went and talked with the Marketing people and found out that they were sending the same communication (sent at the time of acquisition) to the list of customers identified by the model as the most likely churners.

The result? The already unsatisfied customers who were thinking of switching got an inappropriate message/treatment, got further irritated, and eventually left. In other words, the customers identified as likely churners by the model were effectively encouraged to leave, thereby shooting up the model's accuracy!!!

If you have come across such cases, please share them with me in your comments:-)

Thursday, February 19, 2009

Two Step Cluster - Customer Segmentation in Telecom

I love cluster analysis because, unlike with a lot of other techniques, I don't have to make any assumptions about the underlying distribution of the data. Though there are a few assumptions for best performance, it's perfectly okay to cluster data that may not meet them. Only the business requirements/goals can determine whether the clusters/segments are useful and the solution satisfactory.

Customer Segmentation is the process of splitting a customer database into distinct, meaningful, and homogenous groups based on specific parameters or attributes. At a macro level, the main objective for customer segmentation is to understand the customer base, monitor and understand changes over time, and to support critical strategies and functions such as CRM, Loyalty programs, and product development.

At a micro level, the goal is to support specific campaigns, commercial policies, and cross-selling & up-selling activities, and to analyze/manage churn & loyalty.

SPSS has three different procedures that can be used to cluster data: hierarchical cluster analysis, k-means clustering, and two-step clustering. Two-step clustering is appropriate for large datasets or datasets that have a mixture of continuous and categorical variables, and it requires only one pass over the data (which is important for very large data files).

The first step - Formation of Preclusters
Preclusters are just clusters of the original cases that are used in place of the raw data to reduce the size of the matrix that contains distances between all possible pairs of cases. When preclustering is complete, all cases in the same precluster are treated as a single entity. The size of the distance matrix is no longer dependent on the number of cases but on the number of preclusters. These preclusters are then used in hierarchical clustering.


The second step - Hierarchical Clustering of Preclusters
In the second step, the standard hierarchical clustering algorithm is used on the preclusters.
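SPSS's exact TwoStep procedure (a CF-tree precluster pass followed by model-based hierarchical clustering, with a log-likelihood distance that also handles categorical fields) isn't available outside SPSS, but the same two-step idea can be sketched in Python with scikit-learn's Birch, which builds compact subclusters in one pass and then agglomeratively merges their centroids into the requested number of clusters. The file name and column names below are made up, and only continuous fields are used:

import pandas as pd
from sklearn.cluster import Birch
from sklearn.preprocessing import StandardScaler

# Hypothetical telecom usage file; the column names are made up for illustration.
df = pd.read_csv("telecom_customers.csv")
features = ["mou", "recurring_charge", "revenue", "roaming_calls", "care_calls"]

X = StandardScaler().fit_transform(df[features])

# Step 1: Birch builds "preclusters" (CF-tree subclusters) in a single pass.
# Step 2: with n_clusters set, it runs agglomerative clustering on those
# subcluster centroids to produce the final segments.
model = Birch(threshold=0.5, n_clusters=3)
df["segment"] = model.fit_predict(X)

print(df.groupby("segment")[features].mean())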


The dataset I am going to use has information on 75 attributes for more than 70,000 customers. Product/service usage variables for all customers in the dataset are averages calculated over a period of four months.

In SPSS Clementine, the Data Audit node, available under the Output nodes palette, gives the basic descriptive statistics (mean, min, max, ...) and the quality (outliers, missing values, ...) of the variables.
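Outside Clementine, a rough equivalent of the Data Audit node can be put together with pandas (the file name is hypothetical):

import pandas as pd

df = pd.read_csv("telecom_customers.csv")    # hypothetical file name
print(df.describe(include="all"))            # mean, min, max, counts per field
print(df.isna().mean().sort_values(ascending=False).head(20))  # share of missing values per field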


Out of the 75 variables in the dataset, I used about 15 original variables and 3 new derived variables after considering their quality and business relevance. These selected variables were a combination of demographic, billing, and usage information.


The two-step cluster analysis produced 3 clusters. A very interesting difference was observed between Clusters 1 and 2.


Customers in Cluster 2 display the following characteristics:
- few of them are married
- few of them have children
- few of them have a credit card
- owns the most expensive mobile set

- maximum # of incoming & outgoing calls
- maximum # of roaming calls
- maximum MOU (minutes of usage)
- maximum # of active subscriptions
- maximum recurring charge (or, subscribes to the most expensive calling plan)
- maximum revenue

- maximum # of calls to customer care
- has the largest proportion of customers with low credit rating


Customers in Cluster 1 display characteristics that are exactly the opposite in ALMOST all of the areas mentioned above. So we have customers who are married with children, possess a credit card, own a cheap mobile set, subscribe to the least expensive calling plan, make the minimum # of calls (incoming, outgoing, roaming & customer care), and have the highest credit rating.

Customers in Cluster 3 follow the middle path (in almost all the attributes) and offer no interesting or meaningful insights.

So what can be the business application of this exercise?
To put it simply, cluster analysis has thrown up two very distinct groups of customers: highly profitable but high-risk customers in Cluster 2, and low-profit, low-risk customers in Cluster 1.


For the highly profitable but high risk customers, one or more of the following actions can be implemented:
- Enhance credit risk monitoring
- Establish stringent usage thresholds
- Educate customers about alternative payment options, or make CC a mandatory payment method
- Migrate to pre-paid plans


For the low-profit, low-risk customers, usage stimulation campaigns can be attempted, with or without further segmentation.

This is one of the most basic examples of customer segmentation. If we consider traffic analysis information by taking ratios of certain call/service usage parameters, we can identify customer groups who have increased or decreased their usage. If we consider customer tenure, we can have an understanding of customer loyalty. Accordingly, specific actions can be taken for these groups.
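For instance, a simple usage-trend flag can be derived from such ratios; a small pandas sketch (the file and column names are hypothetical) might look like this:

import pandas as pd

df = pd.read_csv("telecom_customers.csv")   # hypothetical file with monthly MOU columns

# Ratio of the latest month's minutes of usage to the average of the previous
# three months: > 1 means usage is growing, < 1 means it is declining.
df["usage_trend"] = df["mou_month4"] / df[["mou_month1", "mou_month2", "mou_month3"]].mean(axis=1)

df["trend_group"] = pd.cut(df["usage_trend"],
                           bins=[0, 0.8, 1.2, float("inf")],
                           labels=["declining", "stable", "growing"])
print(df["trend_group"].value_counts())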

Tuesday, February 3, 2009

The Stakeholders

According to the Encarta dictionary, a stakeholder is a person or group with a direct interest, involvement, or investment in something.

The most important task faced by a data miner is to understand the client's business background and arrive at the business and data mining objectives by asking the right, relevant questions of the right people. And the right people here are the so-called stakeholders; identifying them gets the job half done!

According to Dorian Pyle, these stakeholders can be divided into five groups:

1. Need Stakeholders – People who actually experience the business problem regularly, in their work. In most situations, they have developed intuitive ideas about what is causing the problem, what the solution is, and how it should be applied. They often express their needs as an expected/desired solution, not as a description of the problem.

2. Money Stakeholders – People who will commit the resources that allow the project to move forward. The business case document written to support the data mining project is mainly addressed to these people. It is usually not possible for these stakeholders to say “yes” to a project (that is the prerogative of the decision stakeholder), but they can easily say “no” if the numbers aren’t convincing.

3. Decision Stakeholders – People who make the decision of whether to execute the project. This person is very important but difficult to identify, as he or she is not directly involved with the data miner and relies instead on input from people who have interacted with the data miner.

4. Beneficiary Stakeholders – People who will get the benefit of the results of the data mining project/model; people who will be directly affected. They usually have the ability to promote the success or bring about the failure of many data mining projects.

5. Kudos Stakeholders – People who have sold the project internally. Credit for the project’s success will accrue to them, as will the negative impact of a less-than-successful project. It is very important to understand from these people what determines success and how the project result will be evaluated.

Friday, January 9, 2009

Q & A with Eric Siegel, President of Prediction Impact

It's my pleasure to welcome Eric Siegel, President of Prediction Impact, to datalligence. He has kindly answered some of my questions related to Data Mining.

Q1. A brief intro about yourself and your DM experience
Eric: I've been in data mining for 16 years and commercially applying predictive analytics with Prediction Impact since 2003. As a professor at Columbia University, I taught the graduate course in predictive modeling (referred to as "machine learning" at universities), and have continued to lead training seminars in predictive analytics as part of my consulting career.

I'm also the program chair for Predictive Analytics World, coming to San Francisco Feb 18-19. This is the business-focused event for predictive analytics professionals, managers and commercial practitioners. This conference delivers case studies, expertise and resources in order to strengthen the business impact delivered by predictive analytics.


Q2. What are the most common mistakes you've encountered while working on DM projects?
Eric: The main mistake is not following best-practice organizational processes, as set forth by standards such as CRISP-DM (mentioned in your Dec 18th blog post on "Methodologies").

Predictive analytics' success hinges on deciding as an organization which specific customer behavior to predict. The decision must be guided not only by what is analytically feasible with the data available, but by which predictions will provide a positive business impact. This can be an elusive thing to pin down, requiring truly informed buy-in from various parties, including those whose operational activities will be changed by integrating the predictive scores output by a model. The iterative process model defined by CRISP-DM and other standards ensures that you "plan backwards," starting from the end deployment goal, including the right personnel at key decision points throughout the project, and establishing realistic timelines and performance expectations.

Dr. John Elder has a somewhat famous list of the top 10 common-but-deadly mistakes, which is an integral part of the workshop he's conducting at Predictive Analytics World, "The Best and the Worst of Predictive Analytics: Predictive Modeling Methods and Common Data Mining Mistakes". As he likes to say, you learn "Best Practices by seeing their flip side: Worst Practices". For more information about the workshop, see The Best and the Worst of Predictive Analytics.

Q3. Translating the Business Goal to a Data Mining Goal, and then defining the acceptable model performance/accuracy level for the success of the DM project appears to be one of the biggest challenges in a DM project. One approach is to use the typical accuracy level used in that particular domain. Another method is to model on a sample dataset (sort of a POC) to come up with an acceptable model performance/accuracy level for the entire dataset/project. Which approaches do you recommend/use to define the acceptable accuracy/cut-off level for a DM project?
Eric: Acceptable performance should be defined as the level where your company attains true business value. Establishing typical performance for a domain can be very tricky, since, even within one domain, each company is so unique - the context in which predictive models will be deployed is unique in the available data (which reflects unique customer lists and their responses or lack thereof to unique products) and in the operational systems and processes. Instead, forecast the ROI that will be attained in model deployment, based on both optimistic and conservative model performance levels. Then, if the conservative ROI looks healthy enough to move forward (or the optimistic ROI is exciting enough to take a risk), determine a minimal acceptable ROI and the corresponding model performance that would attain it as the target model performance level. This is then followed as the goal that must be attained in order to deploy the model, putting its predictive scores into play "in the field".

Q4. One thing I hear a lot from freshers entering the DM field is that they want to learn SAS. Considering the fact that SAS programming skills are highly respected and earn more than any other DM software skills, it's actually a futile exercise to convince these freshers that a tool-neutral DM knowledge is what they should actually strive for. What's your opinion on this?
Eric: Well, I think most people understand there are advantages to taking general driving lessons, rather than lessons that teach you only how to drive a Porsche. On the other hand, you can only sit in one car at a time, and when you learn how to drive your first car, most of what you learn applies in general, for other cars as well. All cars have steering wheels and accelerators; many predictive modeling tools share the same standard, non-proprietary core analytical methods developed at universities (decision trees, neural networks, etc.), and all of them help you prepare the data, evaluate model performance by viewing lift curves and such, and deploy the models.

Q5. According to you, what are the new areas/domains where DM is being applied?
Eric: I see human resource applications, including human capital retention, as an up-and-coming, and an interesting contrast to marketing applications: predict which employees will quit rather than the more standard prediction of which customer will defect.

I consider these the hottest areas (all represented by named case studies at PAW-09, by the way):

* Marketing and CRM (offline and online)
- Response modeling
- Customer retention with churn modeling
- Acquisition of high-value customers
- Direct marketing
- Database marketing
- Profiling and cloning
* Online marketing optimization
- Behavior-based advertising
- Email targeting
- Website content optimization
* Product recommendation systems (e.g., the Netflix Prize)
* Insurance pricing
* Credit scoring

Q6. In spite of the fact that a lot of companies in India provide Analytics or Data Mining as a service/solution to companies around the world, there are no institutions/companies providing quality, industry-focused Data Mining education. There are no colleges/universities offering a Masters in Analytics/Data Mining in India. I have a lot of friends/colleagues who would gladly take up such courses/programs if they were made available in India. Can we expect these kinds of courses/trainings from Prediction Impact, The Modeling Agency, TDWI, etc. in the near future?
Eric: I'm in on discussions several times a year about bringing a training seminar to other regions beyond North America and Europe, but it isn't clear when this will happen. For now, Prediction Impact does offer an online training program, "Predictive Analytics Applied", available on demand at any time.