Hi,
I did some classification tasks with WEKA 3.4 using the DecisionTable classifier, and it worked perfectly without any error. Now I'm having problems in the evaluation phase using WEKA 3.5.6 with DecisionTable. I get this error message:
Problem evaluating classifier: weka.classifiers.Evaluation
What can I do?
I suppose it's related to the classifier settings... I tried to change them, but I always get the same error message, which did not appear with WEKA 3.4.
Thank you
Carmelo

Hello all,
I used weka 3.5.6 to do regression on my data. I used SVMreg choosing
polyKernel, and got this:
SVMreg
weights (not support vectors):
- 0.3499 * (normalized) x
+ 0.5315
Can I just take this equation as the regression equation? I mean y = -0.3499 * (normalized) x + 0.5315? Would someone please tell me how the x is normalized in Weka?
Thank you.
--
Xue, Li
Bioinformatics and Computational Biology program @ ISU
Ames, IA 50010
515-450-7183
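A note on the question above: assuming SVMreg's default filter setting, attributes are passed through Weka's Normalize filter, which rescales each attribute to [0,1] using the training data's minimum and maximum. The sketch below is pure illustration; the class name, and the min/max values, are made up, and whether the target is also rescaled is worth checking for your Weka version.

```java
public class NormalizedPrediction {
    // Weka's Normalize filter (the default for SVMreg, assumed here)
    // rescales an attribute to [0,1] using the training data's min and max:
    //   x_norm = (x - min) / (max - min)
    static double normalize(double x, double min, double max) {
        return (x - min) / (max - min);
    }

    // Evaluating the reported model y = -0.3499 * (normalized) x + 0.5315
    // on the normalized input. The min and max here are hypothetical.
    static double predict(double x, double min, double max) {
        return -0.3499 * normalize(x, min, max) + 0.5315;
    }

    public static void main(String[] args) {
        double min = 2.0, max = 12.0; // hypothetical training range of x
        System.out.println(predict(7.0, min, max));
    }
}
```

So to reuse the printed equation outside Weka, you would first rescale the raw x with the training set's min and max before plugging it in.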

Can someone please post a few examples of HMMs with a sample dataset, or a link to a tutorial on HMM_WEKA?
On the web site by Marco Gillies, there is very little specifying how the input .arff/.csv file should be structured:
"The HMM classifiers only work on sequence data, which in Weka is represented as a relational attribute (see http://weka.wikispaces.com/Multi-instance+classification). Data instances must have a single, nominal class attribute and a single, relational sequence attribute. The instances in this relational attribute may either consist of single, nominal data instances (in the case of discrete HMMs) or multivariate, numeric attributes (in the case of Gaussian HMMs)."
I am not able to work out exactly how I should proceed with this.
Please help me out here.
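For what it's worth, here is a guess at what such a file could look like, based on Weka's relational-attribute ARFF syntax (the relation name, attribute names, and values below are invented, so verify against a working multi-instance example):

```
@relation sequences

@attribute class {walk,run}
@attribute seq relational
  @attribute observation {a,b,c}
@end seq

@data
walk,'a\nb\na'
run,'c\nc\nb'
```

Each data row holds the nominal class plus one quoted string containing the whole sequence, with the inner instances separated by escaped newlines (\n).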

Hello All,
I made some Weka video tutorials for my students. They are pretty sloppy (I pause and search around a lot) but may still be useful.
They are located at
http://sentimentmining.net/weka/

Hello,
The number of support vectors is not available in the LibSVM package in Weka. Please help me: how can I obtain the number of support vectors for the different SVM types, such as one-class SVM, C-SVC, and so on?
Best regards,
Maryam

In the following code I initialize my classifier with K2, but I can't find how to set the estimator in the code using setScoreType(SelectedTag). Can you provide me with the details on how to specify it?
public static void main(String args[])
{
    try {
        Instances train =
            DataSource.read("/home/encoder/thesis/Bayesian/data/Letor4.0/MQ2007/Fold1/train.arff");
        Instances test =
            DataSource.read("/home/encoder/thesis/Bayesian/data/Letor4.0/MQ2007/Fold1/test.arff");
        train.setClassIndex(train.numAttributes() - 1);
        test.setClassIndex(test.numAttributes() - 1);

        BayesNet myBayes = new BayesNet();
        K2 myK2 = new K2();
        myK2.setInitAsNaiveBayes(false);
        myK2.setMaxNrOfParents(10);
        myK2.setMarkovBlanketClassifier(true);
        myK2.setScoreType( );  // <------------------ How to declare it?

        myBayes.setSearchAlgorithm(myK2);
        myBayes.buildClassifier(train);
        ......
        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(myBayes, test);
        p.println(eval.toSummaryString("\nResults\n\n", false));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Regards,
Parth.
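For what it's worth, a sketch of how the score type is typically set, assuming the LocalScoreSearchAlgorithm API that K2 extends in recent Weka versions (verify the constant names against your version's Javadoc before relying on this):

```java
import weka.classifiers.bayes.net.search.local.K2;
import weka.classifiers.bayes.net.search.local.LocalScoreSearchAlgorithm;
import weka.classifiers.bayes.net.search.local.Scoreable;
import weka.core.SelectedTag;

public class ScoreTypeDemo {
    public static void main(String[] args) {
        K2 myK2 = new K2();
        // setScoreType() takes a SelectedTag built from one of the Scoreable
        // constants (BAYES, BDeu, MDL, ENTROPY, AIC) together with the tag
        // list defined on LocalScoreSearchAlgorithm.
        myK2.setScoreType(new SelectedTag(Scoreable.BDeu,
                LocalScoreSearchAlgorithm.TAGS_SCORE_TYPE));
        // (The estimator, e.g. SimpleEstimator, is a separate setting made
        // on the BayesNet object itself, via setEstimator().)
        System.out.println(myK2.getScoreType().getSelectedTag().getID());
    }
}
```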

Thanks, Thomas, for the reply. I was reading the paper by Aha and Kibler (1991) on instance-based learning algorithms, which the Weka IB1 and IBk classifiers implement. My impression was that in the paper, IB1, IB2, and IB3 refer to three different instance-based algorithms, and my guess was that I could specify the number of neighbors, k, for each of them. In other words, the numbers in the names IB1, IB2, and IB3 in the paper do not seem to correspond to the number of neighbors I choose, but rather denote three variations. So which of these algorithms does the Weka IBk implementation correspond to? Maybe I should look at the code long enough to figure it out?
Thanks again.
Li
> you can specify the number of Nearest Neighbours, which choice exactly
> makes you use IB1, IB2 etc.
>
> 2009/5/13 Li Yang <lyshane(a)umich.edu>
>
>> Dear Weka experts,
>>
>> I was just wondering whether the IBk classifier implements the IB1, IB2, or
>> IB3 algorithm in Aha and Kibler's article, Instance-based learning algorithm
>> (1991).
>>
>> Thank you in advance for your help.
>>
>> Li
>>
>>
>> _______________________________________________
>> Wekalist mailing list
>> Send posts to: Wekalist(a)list.scms.waikato.ac.nz
>> List info and subscription status:
>> https://list.scms.waikato.ac.nz/mailman/listinfo/wekalist
>> List etiquette:
>> http://www.cs.waikato.ac.nz/~ml/weka/mailinglist_etiquette.html
>>
>>
>
>
> --
> Department of Knowledge Engineering
> Faculty of Humanities & Science
> Maastricht University
>
> http://www.netstorm.be
>

Hi
I am using the weka 3.6 API programmatically for association rule mining with Apriori. Essentially, I am doing market
basket analysis for an electronic store. I have 7 attributes, listed below, with values of either "Y" or "N" depending
on whether an item is present in a transaction.
lap_top
anti_virus_software
flash_drive
hdtv
connector_cable
tv_stand
cd_pack
An example instance is
"Y", "Y", "Y", "N","N", "N", "N"
My problem is that the rules seem to be dominated by the negatively correlated attributes, which is understandable.
Here are the top 5 rules I got:
1. hdtv=N tv_stand=N 9 ==> lap_top=Y 9 conf:(1)
2. lap_top=Y hdtv=N 9 ==> tv_stand=N 9 conf:(1)
3. anti_virus_software=Y connector_cable=N 8 ==> lap_top=Y 8 conf:(1)
4. anti_virus_software=Y tv_stand=N 8 ==> lap_top=Y 8 conf:(1)
5. hdtv=N connector_cable=N 8 ==> lap_top=Y 8 conf:(1)
I am more interested in rules based on positive correlation; in the results above, I am not interested in rules 1, 2, and 5.
How do I get rid of them? Will it help to replace "N" with a missing value? How do I specify a missing value when I create
instances using the API?
I would appreciate any help.
Thanks
Pranab Ghosh
Software Architect
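On the missing-value question: one common approach for market-basket analysis in Weka is to declare each item attribute with the single value "Y" and leave absent items missing, since Apriori skips missing values when forming item sets, so rules only mention items that are present. A sketch using the current Weka API follows (in 3.6 itself the concrete classes differ slightly, e.g. FastVector instead of ArrayList and Instance instead of DenseInstance); the attribute names follow the post, and the class name is made up.

```java
import java.util.ArrayList;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;

public class BasketBuilder {
    public static Instances build() {
        // One nominal attribute per item, with "Y" as its only value;
        // absent items are left missing instead of being coded "N".
        String[] items = {"lap_top", "anti_virus_software", "flash_drive",
                          "hdtv", "connector_cable", "tv_stand", "cd_pack"};
        ArrayList<Attribute> attrs = new ArrayList<Attribute>();
        for (String item : items) {
            ArrayList<String> vals = new ArrayList<String>();
            vals.add("Y");
            attrs.add(new Attribute(item, vals));
        }
        Instances data = new Instances("basket", attrs, 0);

        // Example transaction: first three items present, the rest missing.
        Instance inst = new DenseInstance(items.length);
        inst.setDataset(data);  // needed before setting nominal values
        for (int i = 0; i < 3; i++) inst.setValue(i, "Y");
        for (int i = 3; i < items.length; i++) inst.setMissing(i);
        data.add(inst);
        return data;
    }
}
```

With data built this way, the "...=N ==> ..." rules can no longer occur, because "N" is not a value any attribute can take.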

Hi all.
I can see that the weka.core.ContingencyTables class calculates various
statistics about a dataset (e.g. the conditional entropy of a row=feature given a
column=class).
Is there a class that uses this one to simply output these statistics, without
running any classifier algorithm (e.g. J48, which uses conditional entropies to
calculate information gain)?
I am a Java rookie so for any help big or little I would be grateful!
Thanks, Harri S
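If no ready-made wrapper turns up, the statistics are also easy to compute directly. Below is a small pure-Java sketch of conditional entropy from a contingency table of counts, intended to mirror what ContingencyTables.entropyConditionedOnRows does (this version reports bits; the class and method names here, apart from ContingencyTables itself, are made up):

```java
public class EntropyStats {
    // H(column | row): conditional entropy of the column variable (class)
    // given the row variable (feature), from a table of joint counts.
    static double entropyConditionedOnRows(double[][] table) {
        double total = 0;
        for (double[] row : table)
            for (double c : row) total += c;
        double h = 0;
        for (double[] row : table) {
            double rowSum = 0;
            for (double c : row) rowSum += c;
            if (rowSum == 0) continue;
            for (double c : row)
                if (c > 0)
                    // weight each row by its probability, sum -p*log2(p)
                    h -= (c / total) * (Math.log(c / rowSum) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        // Rows = feature values, columns = class values.
        double[][] table = {{2, 2}, {4, 0}};
        System.out.println(entropyConditionedOnRows(table));
    }
}
```

Printing this for each feature/class table gives exactly the per-attribute numbers J48 would use internally, without building any tree.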

Hi,
I'm reading the learned model from an ObjectInputStream and creating a Classifier from it. Then I'm creating an Instance with a, b, c as variables and setting y = 0, which is the class field.
Then I'm running:
clsLabel = aClassifier.classifyInstance(anInstance);
If I use LinearRegression, this works fine. But if I use a MultilayerPerceptron classifier, I get an error:
Exception in thread "main" weka.core.UnassignedDatasetException: Instance doesn't have access to a dataset!
Do I need to do something different for MultilayerPerceptron?
Can anyone tell me where I'm going wrong?
Regards,
Jagadeesh.
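For reference: UnassignedDatasetException usually means the Instance was never attached to an Instances header, and some classifiers (MultilayerPerceptron among them) apparently need that header at prediction time. A sketch of building a header and attaching the instance, assuming the current Weka API (older versions use new Instance(...) instead of DenseInstance); the attribute names a, b, c, y follow the post, and the class name is made up:

```java
import java.util.ArrayList;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;

public class DatasetAttachDemo {
    public static Instance makeInstance() {
        // Rebuild a header whose attributes match the training data:
        // numeric a, b, c plus a numeric class y.
        ArrayList<Attribute> attrs = new ArrayList<Attribute>();
        attrs.add(new Attribute("a"));
        attrs.add(new Attribute("b"));
        attrs.add(new Attribute("c"));
        attrs.add(new Attribute("y"));
        Instances header = new Instances("query", attrs, 0);
        header.setClassIndex(header.numAttributes() - 1);

        Instance inst = new DenseInstance(header.numAttributes());
        // This attachment is what avoids UnassignedDatasetException:
        inst.setDataset(header);
        inst.setValue(0, 1.0);
        inst.setValue(1, 2.0);
        inst.setValue(2, 3.0);
        return inst;  // ready for classifier.classifyInstance(inst)
    }
}
```

LinearRegression presumably gets away without the header because it only reads raw values, which would explain why only MultilayerPerceptron complains.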