Plotting feature importance with XGBoost

Popularity: 97  Published: 2023-01-03  Tags: python machine-learning matplotlib xgboost feature-selection

Problem description

When I plot the feature importance, I get a cluttered plot. I have more than 7,000 variables. I understand that the built-in function only selects the most important ones, but the resulting graph is still unreadable. Here is the full code:

import numpy as np
import pandas as pd
df = pd.read_csv('ricerice.csv')
array=df.values
X = array[:,0:7803]
Y = array[:,7804]
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
seed=0
test_size=0.30
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=test_size, random_state=seed)
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X, Y)
import matplotlib.pyplot as plt
from matplotlib import pyplot
from xgboost import plot_importance
fig1=plt.gcf()
plot_importance(model)
plt.draw()
fig1.savefig('xgboost.png', figsize=(50, 40), dpi=1000)

Even though the saved figure is large, the chart is barely legible.

Recommended answer

A few points:

- To fit the model, you want to use the training dataset (X_train, y_train), not the whole dataset (X, y).
- You can use the max_num_features parameter of plot_importance() to display only the top max_num_features features (e.g. the top 10).

After making the above changes to your code, and using some randomly generated data, the code and output look as follows:

import numpy as np
import matplotlib.pyplot as plt
from xgboost import XGBClassifier, plot_importance
from sklearn.model_selection import train_test_split

# generate some random data for demonstration purposes; use your original dataset here
X = np.random.rand(1000, 100)     # 1000 x 100 data
y = np.random.rand(1000).round()  # 0, 1 labels

seed = 0
test_size = 0.30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)

model = XGBClassifier()
model.fit(X_train, y_train)  # fit on the training split, not the whole dataset

plot_importance(model, max_num_features=10)  # top 10 most important features
plt.show()
