# Older blog entries for sness (starting at number 5698)

LIBSVM FAQ

LIBSVM FAQ: "Q: Does libsvm have special treatments for linear SVM?
No, libsvm solves linear and nonlinear SVMs in the same way. Some tricks could save training/testing time when the linear kernel is used, but libsvm does not apply them, so it is NOT particularly efficient for linear SVMs, especially when C is large and the number of data points is much larger than the number of attributes. You can either

Use small C only. We have shown in the following paper that after C is larger than a certain threshold, the decision function is the same.
S. S. Keerthi and C.-J. Lin. Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15(7):1667-1689, 2003.

Check liblinear, which is designed for large-scale linear classification.
Please also see the discussion of RBF versus linear kernels in our SVM guide."
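The reason a dedicated linear solver like liblinear can be so much faster at prediction is that, with a linear kernel, the whole support-vector expansion collapses into a single weight vector. A minimal pure-Python sketch (all model values below are invented for illustration):

```python
# Sketch: with a linear kernel, sum(alpha_i * y_i * dot(sv_i, x)) + b
# collapses into w = sum(alpha_i * y_i * sv_i), so prediction costs O(f)
# instead of O(nSV * f). The "trained model" here is made up.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical trained model: support vectors, dual coefficients, bias.
support_vectors = [[1.0, 2.0], [0.5, -1.0], [-2.0, 0.5]]
dual_coefs = [0.7, -0.3, -0.4]   # each entry is alpha_i * y_i
bias = 0.1

def predict_kernel(x):
    """O(nSV * f): evaluate the full kernel expansion."""
    return sum(c * dot(sv, x)
               for c, sv in zip(dual_coefs, support_vectors)) + bias

# Collapse the expansion once into a primal weight vector
# (the representation liblinear works with directly).
w = [sum(c * sv[j] for c, sv in zip(dual_coefs, support_vectors))
     for j in range(len(support_vectors[0]))]

def predict_linear(x):
    """O(f): a single dot product."""
    return dot(w, x) + bias

x = [3.0, -1.5]
assert abs(predict_kernel(x) - predict_linear(x)) < 1e-12
```

Both functions return the same decision value; only the cost per prediction differs.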

'via Blog this'

Syndicated 2013-07-13 04:44:00 from sness

Soft margin classification

Soft margin classification: "The optimization problem is then trading off how fat it can make the margin versus how many points have to be moved around to allow this margin. The margin can be less than 1 for a point x_i by setting ξ_i > 0, but then one pays a penalty of Cξ_i in the minimization for having done that."
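The trade-off the quote describes is the standard soft-margin formulation, where the slack variables ξ_i measure how far each point is allowed inside (or past) the margin:

```latex
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad\text{subject to}\quad
y_i\bigl(w^\top x_i + b\bigr) \ge 1 - \xi_i,\qquad \xi_i \ge 0.
```

A large C makes margin violations expensive (thin margin, few violations); a small C tolerates violations in exchange for a fatter margin.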

'via Blog this'

Syndicated 2013-07-13 04:38:00 from sness

Support vector machine - Wikipedia, the free encyclopedia

Support vector machine - Wikipedia, the free encyclopedia: "To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function selected to suit the problem."
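A tiny concrete instance of this "kernel trick": for the degree-2 polynomial kernel K(x, z) = (x·z)² in two dimensions, the implicit feature map is φ(x) = (x₁², x₂², √2·x₁x₂), and the kernel computes φ(x)·φ(z) without ever forming φ. The example vectors are arbitrary:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def kernel(x, z):
    """Dot product in feature space, computed in the original space."""
    return dot(x, z) ** 2

def phi(x):
    """Explicit degree-2 feature map (only needed to verify the claim)."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

x, z = [1.0, 3.0], [2.0, -1.0]
# Both routes give the same number; kernel() never builds the 3-D features.
assert abs(kernel(x, z) - dot(phi(x), phi(z))) < 1e-9
```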

'via Blog this'

Syndicated 2013-07-13 04:37:00 from sness

opencv - How to speed up svm.predict? - Stack Overflow

opencv - How to speed up svm.predict? - Stack Overflow: "The prediction algorithm for an SVM takes O(nSV * f) time, where nSV is the number of support vectors and f is the number of features. The number of support vectors can be reduced by training with stronger regularization, i.e. by increasing the hyperparameter C (possibly at a cost in predictive accuracy)."
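The O(nSV · f) cost is visible directly in the shape of the decision function: one kernel evaluation per support vector, and each kernel evaluation touches every feature. A sketch with an RBF kernel (gamma, support vectors, and coefficients below are all invented):

```python
import math

gamma = 0.5
support_vectors = [[0.0, 1.0, 2.0], [1.0, -1.0, 0.5],
                   [2.0, 0.0, -1.0], [0.5, 0.5, 0.5]]
dual_coefs = [0.9, -0.2, -0.4, -0.3]  # each entry is alpha_i * y_i
bias = -0.05

def rbf(u, v):
    """Gaussian kernel; one call costs O(f)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def decision(x):
    """O(nSV * f): one O(f) kernel evaluation per support vector."""
    return sum(c * rbf(sv, x)
               for c, sv in zip(dual_coefs, support_vectors)) + bias

def predict(x):
    return 1 if decision(x) > 0 else -1
```

Shrinking nSV (e.g. via the regularization change the answer mentions) reduces prediction time proportionally, since the loop in `decision` is exactly nSV iterations long.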

'via Blog this'

Syndicated 2013-07-12 17:44:00 from sness

QuerySet API reference | Django documentation | Django

QuerySet API reference | Django documentation | Django: "In some complex data-modeling situations, your models might contain a lot of fields, some of which could contain a lot of data (for example, text fields), or require expensive processing to convert them to Python objects. If you are using the results of a queryset in some situation where you know you don't need those particular fields, you can tell Django not to retrieve them from the database."
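The API the passage refers to is `QuerySet.defer()` (and its complement `only()`). A hedged sketch with a hypothetical `Entry` model; outside a configured Django project this is illustrative only:

```python
# Hypothetical model with one large, rarely-needed text field.
from django.db import models

class Entry(models.Model):
    headline = models.CharField(max_length=255)
    body = models.TextField()  # large; expensive to fetch when unused

# Skip loading `body` from the database; it is fetched lazily
# (with an extra query) only if the attribute is actually accessed.
headlines = Entry.objects.defer("body")

# Equivalently, name just the fields you need; the primary key
# is always included.
headlines = Entry.objects.only("headline")
```

The usual caveat applies: deferring a field you then access per-row trades one wide query for many small ones, so defer only fields you genuinely won't touch.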

'via Blog this'

Syndicated 2013-07-12 04:47:00 from sness

Support Vector Machines: Parameters

Support Vector Machines: Parameters: "However, it is critical here, as in any regularization scheme, that a proper value is chosen for C, the penalty factor. If it is too large, we have a high penalty for nonseparable points and we may store many support vectors and overfit. If it is too small, we may have underfitting." (Alpaydin 2004, p. 224)

'via Blog this'

Syndicated 2013-07-11 21:17:00 from sness

Marbert Rocel - Small Hours / Daniel Stefanik Remix [Compost Black Label...