EX267 Red Hat Exam Practice Questions

This PDF contains a set of carefully selected practice questions for the
EX267 exam. These questions are designed to reflect the structure,
difficulty, and topics covered in the actual exam, helping you reinforce
your understanding and identify areas for improvement.

What's Inside:

1. Topic-focused questions based on the latest exam objectives
2. Accurate answer keys to support self-review
3. Designed to simulate the real test environment
4. Ideal for final review or daily practice

Important Note:

This material is for personal study purposes only. Please do not
redistribute or use it for commercial purposes without permission.

For full access to the complete question bank and topic-wise explanations, visit:
CertQuestionsBank.com

Our YouTube: https://www.youtube.com/@CertQuestionsBank

FB page: https://www.facebook.com/certquestionsbank
A selection of EX267 exam practice questions is shared below.
1.Which two are loss functions for classification tasks?
A. Mean Squared Error
B. Binary Cross-Entropy
C. Categorical Cross-Entropy
D. Huber Loss
Answer: B, C
Explanation:
Binary Cross-Entropy is used for binary classification, while Categorical Cross-Entropy handles multi-
class problems. Both measure the difference between predicted and actual class probabilities.
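
For illustration, a minimal Keras sketch (assuming TensorFlow is installed; the layer sizes and shapes are made up) showing where each loss is selected:

    import tensorflow as tf

    # Binary classification: a single sigmoid output with binary cross-entropy
    binary_model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    binary_model.compile(optimizer="adam", loss="binary_crossentropy")

    # Multi-class classification: softmax outputs with categorical cross-entropy
    multi_model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    multi_model.compile(optimizer="adam", loss="categorical_crossentropy")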

2.What OpenShift resource is commonly used to store data connection credentials?
A. ConfigMap
B. Secret
C. PersistentVolume
D. ServiceAccount
Answer: B
Explanation:
A Secret securely stores sensitive data like passwords, API keys, and credentials for data
connections.

3.Which metric is commonly used to evaluate a regression model?
A. Mean Squared Error (MSE)
B. Regression Model Accuracy
C. Precision for Regression Tasks
D. F1 Score for Regression Models
Answer: A
Explanation:
MSE measures the average squared difference between predicted and actual values, penalizing large
errors. It is a standard metric for regression model evaluation.
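
As a quick hedged sketch (scikit-learn assumed available; the numbers are made-up example values):

    from sklearn.metrics import mean_squared_error

    y_true = [3.0, 5.0, 2.5, 7.0]  # actual values
    y_pred = [2.5, 5.0, 4.0, 8.0]  # model predictions

    # Mean of the squared differences; large errors are penalized quadratically
    mse = mean_squared_error(y_true, y_pred)
    print(mse)  # 0.875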

4.Which two status phases indicate a workbench pod is running correctly?
A. Pending
B. Ready
C. Running
D. Succeeded
Answer: B, C
Explanation:
The Running phase shows the pod is active, while Ready means the pod is fully operational and
serving requests.

5.What is the default namespace for DataScienceCluster objects?
A. openshift-ai
B. datascience
C. default
D. odh-system
Answer: A
Explanation:
openshift-ai is the default namespace where DataScienceCluster objects and associated Open Data
Hub components are deployed.

6.What does git remote -v display?
A. The commit history
B. The current branch name
C. The list of remote repositories and URLs
D. The status of the working directory
Answer: C
Explanation:
git remote -v lists the configured remote repositories and their URLs. It shows both fetch and push
URLs, helping you verify remote connections.

7.What resource is used to bind a role to a group in a project?
A. Role
B. GroupPolicy
C. ClusterRole
D. RoleBinding
Answer: D
Explanation:
A RoleBinding associates roles with users or groups within a specific project, enabling role-based
access control.

8.How do you modify an existing workbench in OpenShift AI?
A. oc patch notebook <name>
B. oc update notebook <name>
C. oc edit notebook <name>
D. oc modify notebook <name>
Answer: C
Explanation:
The oc edit notebook <name> command allows updating the configuration of an existing workbench
directly in the YAML definition.

9.How do you push a new Jupyter notebook to a remote repository?
A. git update
B. git upload
C. git sync main update
D. git push origin main
Answer: D
Explanation:
git push origin main pushes the local commits on the main branch to the remote repository, updating
it.

10.What two Operators are essential for managing DataScienceCluster objects?
A. GPU Operator
B. OpenShift AI Operator
C. DataScienceCluster Operator
D. Notebook Operator
Answer: B, C
Explanation:
The OpenShift AI Operator and DataScienceCluster Operator manage deployment, updates, and
lifecycle tasks for DataScienceCluster objects.

11.What happens when a PVC exceeds its allocated storage?
A. The pod crashes.
B. Data is written to temporary storage.
C. An error is logged.
D. The PVC automatically resizes.
Answer: C
Explanation:
If a PVC exceeds its allocated storage, an error is logged, and subsequent write operations may fail
due to insufficient space.

12.What is the purpose of ModelMesh in OpenShift AI?
A. Train models
B. Deploy models at scale
C. Visualize data
D. Debug Kubernetes clusters
Answer: B
Explanation:
ModelMesh efficiently deploys and manages multiple machine learning models at scale within
OpenShift AI environments.

13.What command checks the status of a ModelMesh deployment?
A. oc get services
B. kubectl describe modelmesh
C. kubectl get deployments
D. kubectl get pods
Answer: D
Explanation:
kubectl get pods shows the status of all pods, including those deployed by ModelMesh for model
serving.

14.Which function is used to evaluate a trained model in TensorFlow?
A. model.evaluate()
B. model.fit()
C. model.train()
D. model.compile()
Answer: A
Explanation:
The model.evaluate() function computes the loss and other metrics for a validation or test dataset,
providing insights into the model's generalization capabilities.
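
A minimal sketch of the call, assuming a compiled Keras model and random stand-in test data (none of this comes from the exam itself):

    import numpy as np
    import tensorflow as tf

    # Tiny stand-in model; a real workflow would load a trained model
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])

    x_test = np.random.rand(32, 4)
    y_test = np.random.rand(32, 1)

    # Returns the loss followed by each compiled metric
    loss, mae = model.evaluate(x_test, y_test, verbose=0)
    print(loss, mae)
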
15.Which two libraries are used for visualizing training progress in real-time?
A. TensorBoard
B. Matplotlib
C. Pandas
D. Dask
Answer: A, B
Explanation:
TensorBoard provides interactive visualizations, such as loss curves, in real-time. Matplotlib
complements it by generating static plots for deeper analysis of training metrics.
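
A hedged sketch of using both together in Keras (the log directory, data, and model are hypothetical stand-ins):

    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")
    x, y = np.random.rand(64, 4), np.random.rand(64, 1)

    # TensorBoard watches this directory and plots the curves live
    tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")  # hypothetical path
    history = model.fit(x, y, epochs=5, callbacks=[tb], verbose=0)

    # Matplotlib produces a static loss curve from the recorded history
    plt.plot(history.history["loss"])
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.savefig("loss_curve.png")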

16.Which two benefits does idle notebook culling provide?
A. Reduces resource consumption
B. Improves notebook performance
C. Prevents notebook deletion
D. Optimizes cluster capacity
Answer: A, D
Explanation:
Idle notebook culling reduces resource usage by stopping inactive notebooks, improving overall
cluster efficiency and capacity management.

17.What command checks if a user has permission to perform an action?
A. oc auth can-i
B. oc policy check
C. oc check-role
D. oc verify-permission
Answer: A
Explanation:
The oc auth can-i command verifies whether a user is authorized to perform a specific
action, helping assess permissions quickly.

18.Which command initializes a new Git repository in a Jupyter notebook directory?
A. git init
B. git create repo
C. git start <repo-name>
D. git new <repo-name>
Answer: A
Explanation:
git init sets up an empty Git repository in the current directory by creating a .git folder. This command
begins tracking version control for your notebook project. It’s the first step when starting a new
project.

19.Which two commands display storage usage for a workbench?
A. oc describe pvc <name>
B. oc adm top pods
C. kubectl view storage
D. oc get pvc <name>
Answer: A, B
Explanation:
oc describe pvc <name> shows detailed storage info, while oc adm top pods displays real-time
resource usage for pods.

20.How do you delete a deployed InferenceService in KServe?
A. kubectl delete inferenceservice <name>
B. kubectl remove service <name>
C. oc remove runtime <name>
D. oc delete predictor <name>
Answer: A
Explanation:
kubectl delete inferenceservice <name> removes the specified InferenceService and its associated
resources from the cluster.

21.Which Python library is suitable for creating high-performance APIs for model deployment?
A. Flask
B. Matplotlib
C. NumPy
D. FastAPI
Answer: D
Explanation:
FastAPI is a modern, high-performance web framework for building APIs, providing features like
automatic documentation and fast request handling.

22.Which two commands provide the logs of a workbench pod?
A. oc logs <pod-name>
B. kubectl get logs <pod-name>
C. oc debug pod/<pod-name>
D. oc describe pod <pod-name>
Answer: A, C
Explanation:
oc logs <pod-name> retrieves standard logs, while oc debug pod/<pod-name> helps troubleshoot
with an interactive shell.

23.Which command exposes a Flask API running on port 5000 in a Docker container?
A. docker expose 5000
B. docker run -p 5000:5000
C. docker launch 5000
D. docker serve 5000
Answer: B
Explanation:
docker run -p 5000:5000 maps the container's port 5000 to the host's port 5000, allowing external
access to the Flask API.

24.What prerequisite must be met for Persistent Storage in OpenShift AI?
A. Only NFS storage is allowed
B. A StorageClass must be configured
C. Dynamic provisioning must be disabled
D. Static IP allocation is required
Answer: B
Explanation:
A StorageClass must be configured in OpenShift AI to dynamically provision Persistent Volumes,
allowing seamless storage management for AI workloads.

25.What is the primary role of a Kubeflow Pipeline?
A. Train deep learning models
B. Visualize model metrics
C. Automate end-to-end ML workflows
D. Deploy REST APIs
Answer: C
Explanation:
Kubeflow Pipelines automate end-to-end machine learning workflows, handling tasks from data
preprocessing to model evaluation.
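
As a hedged sketch using the Kubeflow Pipelines v2 SDK (the component logic and names are hypothetical stand-ins, not from the exam):

    from kfp import compiler, dsl

    @dsl.component
    def preprocess(text: str) -> str:
        # Stand-in for a real data-preprocessing step
        return text.lower()

    @dsl.component
    def train(text: str) -> str:
        # Stand-in for a real training step
        return "model trained on: " + text

    @dsl.pipeline(name="demo-pipeline")  # hypothetical name
    def demo_pipeline(text: str = "Sample Data"):
        step1 = preprocess(text=text)
        train(text=step1.output)

    # Compile to a YAML definition that a Kubeflow Pipelines backend can run
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")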

26.Which file format is used to define RoleBindings in OpenShift AI?
A. JSON
B. INI
C. XML
D. YAML
Answer: D
Explanation:
RoleBindings in OpenShift AI are defined using YAML files, which provide a structured and human-
readable way to declare permissions.

27.Which two are unsupervised learning techniques?
A. Linear Regression
B. K-Means Clustering
C. Principal Component Analysis (PCA)
D. Decision Trees
Answer: B, C
Explanation:
Unsupervised learning techniques like K-Means Clustering and PCA find patterns in data without
labels.
Clustering groups similar data points, while PCA reduces dimensions for better visualization.
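
A minimal scikit-learn sketch of both techniques on made-up unlabeled data:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    X = np.random.rand(100, 5)  # unlabeled example data

    # K-Means groups similar points into k clusters
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)

    # PCA projects the data down to 2 dimensions for visualization
    X_2d = PCA(n_components=2).fit_transform(X)
    print(labels[:10], X_2d.shape)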

28.Which two tools help monitor and debug model deployments in Kubernetes?
A. Prometheus
B. Grafana
C. Flask
D. FastAPI
Answer: A, B
Explanation:
Prometheus collects metrics, and Grafana visualizes them, both essential for monitoring and
debugging model deployments in Kubernetes.
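
For illustration, a hedged sketch of how a model server might expose metrics for Prometheus to scrape (the prometheus_client library is assumed; the metric names and port are hypothetical):

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metrics a model server might expose
    REQUESTS = Counter("model_requests_total", "Total prediction requests")
    LATENCY = Histogram("model_latency_seconds", "Prediction latency")

    # Serves http://localhost:8000/metrics for Prometheus; Grafana charts it
    start_http_server(8000)

    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for inference
        REQUESTS.inc()
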
29.How do you check the status of a workbench?
A. oc check workbench <name>
B. oc status notebook <name>
C. oc describe notebook <name>
D. oc get pods -n openshift-ai
Answer: C
Explanation:
The oc describe notebook <name> command provides detailed information on the status and
configuration of the workbench.

30.What two commands create a new group in OpenShift AI?
A. oc add group <group-name>
B. oc create group <group-name>
C. oc adm groups new <group-name>
D. oc new-group <group-name>
Answer: B, C
Explanation:
The oc create group <group-name> and oc adm groups new <group-name> commands create a new
group in OpenShift AI, enabling organized user management.

31.Which function in scikit-learn is used to train a linear regression model?
A. LinearRegression.update()
B. LinearRegression.train()
C. LinearRegression.learn()
D. LinearRegression.fit()
Answer: D
Explanation:
The LinearRegression.fit() function in scikit-learn trains the linear regression model on input features
and target data, updating the model parameters accordingly.
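
A minimal sketch with made-up data (y = 2x), just to show the call:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0], [4.0]])  # input feature
    y = np.array([2.0, 4.0, 6.0, 8.0])          # target: y = 2x

    model = LinearRegression()
    model.fit(X, y)  # learns the coefficients from the data

    print(model.coef_, model.intercept_)  # approximately [2.0] and 0.0
    print(model.predict([[5.0]]))         # approximately [10.0]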

32.What happens when a PVC bound to a workbench is deleted?
A. The PVC is immediately removed.
B. Data is retained in temporary storage.
C. The PVC is deleted after the pod terminates.
D. The pod restarts automatically.
Answer: C
Explanation:
The PVC is deleted only after the associated pod terminates, ensuring no data loss while the
workbench is still in use.

33.How do you stop a Jupyter notebook server?
A. Press Ctrl + S
B. Press Ctrl + C in the terminal
C. jupyter stop
D. oc delete notebook
Answer: B
Explanation:
Stopping a Jupyter notebook server requires pressing Ctrl + C in the terminal where the server is
running, gracefully shutting it down.

34.How do you delete a group in OpenShift AI?
A. oc delete group <group-name>
B. oc remove group <group-name>
C. oc erase group <group-name>
D. oc destroy group <group-name>
Answer: A
Explanation:
The oc delete group <group-name> command removes the specified group from the cluster, revoking
associated permissions and memberships.

35.What role grants full administrative permissions in OpenShift AI?
A. admin
B. view
C. edit
D. cluster-admin
Answer: D
Explanation:
The cluster-admin role provides full administrative privileges across the cluster, including the ability to
manage all OpenShift AI resources.

36.Which field in a custom KServe runtime YAML specifies the runtime container image?
A. spec.image
B. spec.runtime.image
C. spec.predictor.image
D. spec.container.image
Answer: C
Explanation:
The spec.predictor.image field in the KServe YAML specifies the Docker image used for the custom
runtime to handle inference tasks.

37.What is the purpose of a loss function in machine learning?
A. To visualize data
B. To update model weights
C. To measure prediction error
D. To encode categorical variables
Answer: C
Explanation:
A loss function measures the difference between the predicted and actual outputs. It guides the
model during training by indicating how far off predictions are. Minimizing the loss improves model
accuracy.
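
To make the "guides the model" point concrete, a tiny hand-rolled gradient step on a single weight (all numbers are made up):

    # One gradient-descent step on weight w for loss L(w) = (w*x - y)^2
    x, y = 2.0, 10.0   # one training example
    w, lr = 1.0, 0.05  # initial weight and learning rate

    pred = w * x
    loss = (pred - y) ** 2     # measure the prediction error
    grad = 2 * (pred - y) * x  # dL/dw
    w -= lr * grad             # minimizing the loss moves w toward 5.0
    print(loss, w)  # 64.0, then w goes from 1.0 to 2.6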

38.Which function in FastAPI is used to create a POST endpoint for model predictions?
A. @app.get()
B. @app.delete()
C. @app.put()
D. @app.post()
Answer: D
Explanation:
@app.post() defines a POST endpoint in FastAPI, commonly used for sending data to the server and
receiving model predictions.
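
A hedged FastAPI sketch (the endpoint path, request schema, and scoring logic are hypothetical):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictionRequest(BaseModel):
        features: list[float]

    @app.post("/predict")  # hypothetical endpoint path
    def predict(request: PredictionRequest):
        # Stand-in scoring; a real service would call a trained model here
        score = sum(request.features) / len(request.features)
        return {"prediction": score}

    # Run with: uvicorn main:app --reload  (assuming this file is main.py)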

39.What is the purpose of kustomize when deploying KServe?
A. To manage Docker images for services
B. To monitor and track model deployments
C. To scale inference services dynamically
D. To customize Kubernetes configurations
Answer: D
Explanation:
kustomize enables customization of Kubernetes resource definitions, simplifying the deployment of
KServe configurations.

40.What happens when you remove a user's permissions in a project?
A. The user retains access
B. The user's resources are deleted
C. The user loses access immediately
D. The project is locked
Answer: C
Explanation:
Removing a user's permissions instantly revokes their access, preventing further interaction with
project resources.

41.What does the status.conditions field indicate in a pod?
A. The resource limits of the pod
B. The image used for the pod
C. The current state and issues of the pod
D. The storage class of the pod
Answer: C
Explanation:
status.conditions provides details on the current state, readiness, and any issues affecting the pod’s
functionality.

42.Which parameter specifies the Open Data Hub version in a DataScienceCluster?
A. version
B. spec.release
C. spec.version
D. apiVersion
Answer: C
Explanation:
The spec.version field sets the specific version of Open Data Hub that the DataScienceCluster will
deploy and manage.
43.Which two programming languages are commonly supported in Jupyter notebooks?
A. Python
B. R
C. JavaScript
D. Go
Answer: A, B
Explanation:
Jupyter notebooks natively support Python and R kernels, widely used for data science, statistical
analysis, and visualization tasks.

44.Which two platforms can Elyra pipelines be executed on?
A. Kubeflow Pipelines
B. Apache Airflow
C. TensorBoard
D. Prometheus
Answer: A, B
Explanation:
Elyra pipelines can be executed on Kubeflow Pipelines and Apache Airflow, making it versatile for
different workflow backends.

45.What step is required before installing OpenShift AI on a cluster?
A. Install Red Hat Enterprise Linux CoreOS
B. Configure the OpenShift OperatorHub
C. Deploy a StatefulSet for AI workloads
D. Set up a VPN connection
Answer: B
Explanation:
Configuring the OpenShift OperatorHub is necessary to make the OpenShift AI Operator available in
the catalog, enabling easy installation and deployment.

Get the full version of the EX267 exam dumps at CertQuestionsBank.com.
