Agree. This is normalization of data: it involves CHANGING the values of the input data, and it can only be done AFTER feature selection -- i.e., this is feature engineering.
Based on this statement from the referenced article: Feature selection: The process of selecting the key subset of features to reduce the dimensionality of the training problem.
Feature selection is the answer since feature engineering is the creation of NEW features from raw data to capture additional information not easily apparent in the original feature set.
Good point and it would seem the most appropriate answer to this question would be Data Transformation. However, to my uneducated eye, that would still make the answer feature selection since you would select features from the transformed data.
I'm guessing the key here is to recall that feature engineering is applied first to generate additional features, and then feature selection is done to eliminate irrelevant, redundant, or highly correlated features.
The answer is Feature engineering!
Feature engineering involves transforming raw data into features suitable for machine learning algorithms.
One crucial step is to ensure that numeric variables have a similar scale. This process, often called normalization or standardization, prevents features with larger scales from dominating the model's learning process.
E.g.:
Imagine you have a dataset with two features: age (ranging from 18 to 65) and income (ranging from $10,000 to $200,000). Without scaling, the income feature might dominate the model's decision-making process, leading to biased results. By scaling both features to a similar range (e.g., 0 to 1), the model can treat both features equally.
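A minimal sketch of the age/income example above, using plain min-max scaling (no libraries; the feature values are made up for illustration):

```python
def min_max_scale(values):
    """Rescale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [18, 30, 47, 65]                        # original range 18-65
incomes = [10_000, 55_000, 120_000, 200_000]   # original range $10k-$200k

# After scaling, both features live on the same 0-1 scale,
# so neither dominates a distance- or gradient-based model.
scaled_ages = min_max_scale(ages)        # 18 -> 0.0, 65 -> 1.0
scaled_incomes = min_max_scale(incomes)  # 10,000 -> 0.0, 200,000 -> 1.0
```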
Perfect explanation. From my understanding, ensuring that the numeric variables are on a similar scale is data normalization, which is one type of feature engineering.
Feature Engineering = Creation of NEW features from the given raw data. E.g. creating a "seconds" column from a "days" column. Here we are both normalizing and creating a new feature called "seconds." Feature Engineering is the best answer here.
Meanwhile, Feature Selection is choosing features that have a direct, measurable impact on the predictions. E.g. choosing "Driver's Age" and "Number of Years of Driving" while ignoring "Driver's Name" when predicting the likelihood of a road accident.
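The two steps contrasted above can be sketched in a few lines of plain Python (the driver records and column names are made up for illustration):

```python
# Toy driver records as dictionaries.
rows = [
    {"driver_name": "Alice", "age": 34, "years_driving": 15, "days": 2},
    {"driver_name": "Bob",   "age": 22, "years_driving": 3,  "days": 5},
]

# Feature ENGINEERING: derive a NEW "seconds" column from the raw "days" column.
for row in rows:
    row["seconds"] = row["days"] * 24 * 60 * 60

# Feature SELECTION: keep only the columns expected to carry predictive
# signal, dropping "driver_name" (and the now-redundant "days").
selected = [
    {k: r[k] for k in ("age", "years_driving", "seconds")}
    for r in rows
]
```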
This is normalization. In the context of machine learning and data preprocessing, normalization is considered a feature engineering technique, not feature selection.
Normally feature engineering is applied first to generate additional features, and then feature selection is done to eliminate irrelevant, redundant, or highly correlated features.
Feature engineering, as it is hinted here: https://learn.microsoft.com/en-us/azure/machine-learning/concept-automated-ml?view=azureml-api-2#feature-engineering
Given answer is wrong.
Feature engineering involves transforming or manipulating the input features (variables) in a way that improves the performance or interpretability of a machine learning model. One common technique in feature engineering is scaling or normalizing numeric variables to ensure they have a similar scale or range.
When the numeric variables have different scales, it can negatively impact the performance of certain machine learning algorithms. For example, algorithms that are sensitive to the scale of variables, such as gradient descent-based optimization algorithms, may converge slowly or exhibit biased results if the variables have significantly different scales.
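For the gradient-descent case described above, z-score standardization (mean 0, standard deviation 1) is the variant often preferred over min-max scaling. A minimal sketch with the standard library, using made-up income values:

```python
import statistics

def standardize(values):
    """Z-score standardization: subtract the mean, divide by the std dev."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values)  # population standard deviation
    return [(v - mean) / std for v in values]

incomes = [10_000, 55_000, 120_000, 200_000]
z = standardize(incomes)
# The standardized values have mean ~0 and standard deviation ~1,
# which keeps gradient-based optimizers from converging slowly
# along the large-scale dimension.
```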
https://towardsdatascience.com/feature-engineering-for-numerical-data-e20167ec18
Ensuring that the numeric variables in training data are on a similar scale is an example of feature engineering.
Feature engineering involves transforming raw data into a format that is more suitable for modeling. It can involve a range of activities, such as dealing with missing values, creating new features from existing ones, encoding categorical variables, and normalizing numeric variables. Normalizing numeric variables, as you mentioned, is a common form of feature engineering to ensure all variables are on a similar scale. This can be particularly important for certain types of models, like neural networks and support vector machines, which can behave poorly if the features are not on similar scales.
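One of the other feature engineering activities listed above, encoding categorical variables, can be sketched as one-hot encoding in plain Python (the "color" feature is a made-up example):

```python
# A toy categorical feature.
colors = ["red", "blue", "red", "green"]

# Fix a stable ordering of the categories.
categories = sorted(set(colors))  # ['blue', 'green', 'red']

# One-hot encode: each value becomes a 0/1 indicator vector,
# one position per category.
one_hot = [[1 if c == cat else 0 for cat in categories] for c in colors]
# "red" -> [0, 0, 1], "blue" -> [1, 0, 0], "green" -> [0, 1, 0]
```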
Normalization is a **feature engineering** technique used to adjust the values of features in a dataset to a common scale. This is done to facilitate data analysis and modeling, and to reduce the impact of different scales on the accuracy of machine learning models ¹.
Source: Conversation with Bing, 4/24/2023
(1) Feature Engineering: Scaling, Normalization and Standardization. https://www.analyticsvidhya.com/blog/2020/04/feature-scaling-machine-learning-normalization-standardization/ Accessed 4/24/2023.
Feature Selection is the right answer: Normally feature engineering is applied first to generate additional features, and then feature selection is done to eliminate irrelevant, redundant, or highly correlated features. (https://learn.microsoft.com/en-us/azure/architecture/data-science-process/create-features)
No, correct answer is feature engineering. Ensuring that numeric variables are on similar scale involves normalization, which is a kind of feature engineering.