
Secure authentication for Remote Desktop Azure VDI

To automate Virtual Desktop Infrastructure (VDI) authentication on Azure, you can follow these steps:


1. Enable Single Sign-On (SSO) for Azure Virtual Desktop (AVD)


• Configure Azure AD Join: Ensure that the virtual machines are Azure AD joined or hybrid Azure AD joined.

• Use Conditional Access Policies: Enforce policies to allow seamless logins based on trusted devices and locations.

• Enable Seamless SSO with Windows Hello or Pass-through Authentication: Configure Azure AD Connect with pass-through authentication or federated authentication.


2. Configure Group Policy or Intune Policies for Automatic Login


• Use Group Policy Editor or Intune to deploy settings to end-user devices for automatic credential passing.

• Enable automatic logon by setting the DefaultUserName and DefaultPassword values under the Winlogon registry key (if security policies permit); see the sketch below.
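
As a rough sketch only, assuming security policy permits storing the password this way and the script runs with administrative rights, the standard Winlogon autologon values can be set from Python like this (the username and password are placeholders):

import winreg

# Winlogon key that controls automatic logon on Windows.
WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AutoAdminLogon", 0, winreg.REG_SZ, "1")               # turn autologon on
    winreg.SetValueEx(key, "DefaultUserName", 0, winreg.REG_SZ, "your_username")  # placeholder
    winreg.SetValueEx(key, "DefaultPassword", 0, winreg.REG_SZ, "your_password")  # placeholder; stored in plain text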


3. Leverage Azure Key Vault for Secure Credential Storage


• Store sensitive credentials in Azure Key Vault.

• Use a script or Azure Function to retrieve credentials securely and pass them to the login process (see the sketch below).
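
For example, here is a minimal sketch using the Azure SDK for Python, assuming the azure-identity and azure-keyvault-secrets packages are installed; the vault URL and secret name are placeholders. The same pattern works from inside an Azure Function.

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://<your-key-vault-name>.vault.azure.net"  # placeholder vault URL
credential = DefaultAzureCredential()                        # managed identity, Azure CLI login, etc.
client = SecretClient(vault_url=vault_url, credential=credential)

# Fetch the VDI login password at runtime instead of hard-coding it.
secret = client.get_secret("vdi-login-password")             # hypothetical secret name
password = secret.value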


4. Use PowerShell for Scripted Login


• Automate login using a PowerShell script:


# Build a credential object (prefer Get-Credential or Key Vault over hard-coded values)
$username = "your_username"
$password = ConvertTo-SecureString "your_password" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

# Sign in to Azure with the credential
Connect-AzAccount -Credential $credential



• Ensure the script runs in a secure context and, where feasible, retrieve the password from Key Vault (step 3) rather than hard-coding it.


5. Implement Azure AD Conditional Access with Passwordless Authentication


• Set up passwordless authentication methods like FIDO2 security keys, Microsoft Authenticator, or biometrics for your Azure Virtual Desktop users.


6. Leverage Third-party Tools or Custom Scripts


• Consider tools like Citrix Workspace or Horizon View to streamline authentication for VDIs integrated with Azure.

• Alternatively, write custom scripts using Azure SDK or APIs to handle VDI authentication in a secure and automated way.


Security Note:


Automating authentication involves sensitive data. Use secure practices like encryption, role-based access controls, and thorough testing before implementing in production.




Apache Flink adoption across different clouds

Apache Flink is widely adopted across major cloud platforms like AWS, Azure, Google Cloud Platform (GCP), and others due to its powerful stream-processing capabilities. Each cloud provider integrates Flink with their managed services and infrastructure to make it easier for businesses to deploy and scale real-time data applications. Here’s a breakdown of Flink adoption and integration across these cloud platforms:


1. AWS (Amazon Web Services)


Flink Services on AWS:

AWS offers native support for Flink through Amazon Kinesis Data Analytics for Apache Flink (since renamed Amazon Managed Service for Apache Flink), a fully managed service for building Flink applications without the need to manage infrastructure.


Key Features on AWS:

• Amazon Kinesis Data Streams: For real-time data ingestion into Flink applications.

• Amazon S3: For storing snapshots and state data.

• Amazon DynamoDB and RDS: Used as data sinks for processed results.

• Elastic Kubernetes Service (EKS) and EMR: For deploying custom Flink clusters.

• CloudWatch: For monitoring Flink applications.


Use Case Examples:

• Real-time analytics on data streams (e.g., IoT sensor data).

• Fraud detection using Kinesis and Flink.


2. Microsoft Azure


Flink Services on Azure:

Azure supports Flink through integration with its data and analytics ecosystem. While there isn’t a fully managed Flink service like AWS, users can deploy Flink on Azure Kubernetes Service (AKS), Azure HDInsight, or virtual machines (VMs).


Key Features on Azure:

• Azure Event Hubs: For real-time data ingestion.

• Azure Data Lake Storage: For storing Flink state or outputs.

• Azure Synapse Analytics: For integrating processed data for analytics.

• Azure Monitor: For monitoring custom Flink deployments.


Deployment Options:

• Run Flink on AKS for high availability and scalability.

• Use Azure HDInsight with Kafka for integrated streaming pipelines.


Use Case Examples:

• Real-time event processing for telemetry data from IoT devices.

• Streaming analytics in Azure-based enterprise applications.


3. Google Cloud Platform (GCP)


Flink Services on GCP:

GCP supports Flink workloads primarily through Apache Beam: the same Beam pipeline can run on Dataflow, GCP's fully managed stream and batch processing service, or on a self-managed Flink cluster using Beam's Flink runner.


Key Features on GCP:

• Google Pub/Sub: For real-time data ingestion.

• BigQuery: As a data sink or for querying processed data.

• Cloud Storage: For storing state and checkpoints.

• Kubernetes Engine (GKE): For deploying custom Flink clusters.

• Cloud Monitoring: For monitoring Flink applications.


Use Case Examples:

• Real-time personalization and recommendations using Pub/Sub and Dataflow.

• Anomaly detection pipelines leveraging Flink and BigQuery.


4. Other Cloud Platforms


Alibaba Cloud:


• Flink is integrated into Alibaba Cloud’s Realtime Compute for Apache Flink, a fully managed service optimized for large-scale real-time processing.

• Use cases include e-commerce transaction monitoring and advertising analytics.


IBM Cloud:


• Flink can be deployed on IBM Cloud Kubernetes Service or virtual servers.

• Used for real-time processing with data pipelines integrated with IBM Event Streams.


OpenShift/Red Hat:


• Flink is supported in containerized environments like OpenShift, allowing enterprises to run Flink applications on private clouds or hybrid infrastructures.


General Deployment Patterns Across Clouds


1. Kubernetes:

• Flink is commonly deployed using Kubernetes (e.g., AWS EKS, Azure AKS, GCP GKE) for flexibility, scalability, and integration with containerized environments.

2. Managed Services:

• Platforms like AWS (Kinesis Data Analytics) and GCP (Dataflow) simplify deployment by offering managed Flink services.

3. Hybrid and On-Premises:

• Flink is often deployed on hybrid architectures (e.g., OpenShift) to handle sensitive data processing where public cloud isn’t feasible.


Summary


Flink’s integration with cloud-native tools makes it highly adaptable to various real-time and batch processing needs. AWS offers the most seamless Flink experience with its managed Kinesis Data Analytics service. GCP provides integration through Dataflow and Apache Beam, while Azure supports custom deployments with its event and data storage ecosystem. Other platforms like Alibaba Cloud and Red Hat OpenShift extend Flink’s reach into specific enterprise environments.
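
Regardless of which platform hosts the cluster, the application code itself is largely portable. As a minimal sketch, assuming the apache-flink Python package (PyFlink) is installed, a simple streaming job looks like this:

from pyflink.datastream import StreamExecutionEnvironment

# Set up the execution environment (local here; the same code runs on EKS, AKS, or GKE clusters).
env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# A small in-memory source stands in for Kinesis, Event Hubs, or Pub/Sub in this sketch.
readings = env.from_collection(["sensor-1,22.5", "sensor-2,31.4", "sensor-3,29.9"])

# Parse each record and keep only readings above a threshold.
parsed = readings.map(lambda line: (line.split(",")[0], float(line.split(",")[1])))
alerts = parsed.filter(lambda reading: reading[1] > 30.0)

alerts.print()
env.execute("temperature-alert-sketch")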


If you need help deploying Flink on any specific cloud platform, let me know!




Azure DevOps: capturing state change data of work items

The script below calls the Azure DevOps work item updates REST API and prints every revision in which the System.State field changed, authenticating with a personal access token (PAT).

import requests
from requests.auth import HTTPBasicAuth

# Configuration (replace with your own values; the PAT needs Work Items read scope)
organization = "your_organization"
project = "your_project"
work_item_id = "your_work_item_id"
personal_access_token = "your_personal_access_token"

# API endpoint listing all updates (revisions) of the work item
url = f"https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/{work_item_id}/updates?api-version=7.0"

# Azure DevOps accepts a PAT as the password of basic auth with an empty username
response = requests.get(url, auth=HTTPBasicAuth('', personal_access_token))

if response.status_code == 200:
    updates = response.json()["value"]
    for update in updates:
        # Only report revisions where the 'System.State' field changed
        if "System.State" in update.get("fields", {}):
            state_change = update["fields"]["System.State"]
            print("Update ID:", update["id"])
            print("Modified By:", update["revisedBy"]["displayName"])
            print("Date:", update["revisedDate"])
            # The very first revision has no previous state, so fall back to None
            print("Old State:", state_change.get("oldValue"))
            print("New State:", state_change.get("newValue"))
            print("-------------------------------------------------")
else:
    print("Error:", response.status_code)


Data Design Patterns

Data design patterns are solutions to recurring data modeling problems. They are reusable designs that can be applied to different data models.

Data design patterns can help you to improve the quality, efficiency, and scalability of your data models. They can also help you to avoid common data modeling problems.

There are many different data design patterns available. Some of the most common data design patterns include:

  • Active record: The active record pattern wraps a database row in an object that carries both the data and its own persistence logic (save, update, delete).
  • Data mapper: The data mapper pattern separates in-memory objects from the database by moving persistence logic into a dedicated mapper layer.
  • Repository: The repository pattern provides a central, collection-like access point to data, hiding query and storage details from the business logic (see the sketch after this list).
  • Value object: The value object pattern encapsulates an immutable value that is defined by its attributes rather than by an identity (for example, a money amount or a date range).
  • Entity: The entity pattern represents a real-world object with a distinct identity in the data model.
  • Association: The association pattern represents a relationship between two entities.
  • Aggregation: The aggregation pattern represents a whole-part relationship in which the parts can exist independently of the whole.
  • Composition: The composition pattern represents a whole-part relationship in which the parts cannot exist independently of the whole.
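
As a rough illustration of one of these patterns, here is a minimal repository sketch in Python; the Customer entity and the in-memory store are hypothetical stand-ins for a real table or ORM.

from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical entity used only for illustration.
@dataclass
class Customer:
    customer_id: int
    name: str

# Repository: a collection-like access point that hides how data is stored and queried.
class CustomerRepository:
    def __init__(self) -> None:
        self._store: Dict[int, Customer] = {}  # stand-in for a real database

    def add(self, customer: Customer) -> None:
        self._store[customer.customer_id] = customer

    def get(self, customer_id: int) -> Optional[Customer]:
        return self._store.get(customer_id)

# Business logic talks to the repository, not to the storage mechanism.
repo = CustomerRepository()
repo.add(Customer(1, "Ada"))
print(repo.get(1))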

The best data design pattern for you will depend on your specific needs and requirements. If you are not sure which pattern is right for you, I recommend that you consult with a data modeling expert.

Here are some of the factors to consider when choosing a data design pattern:

  • The size and complexity of the data: The larger and more complex the data, the more complex the data design pattern will need to be.
  • The performance requirements: The data design pattern should be chosen to meet the performance requirements of the application.
  • The maintainability requirements: The data design pattern should be chosen to make the data model easy to maintain.
  • The scalability requirements: The data design pattern should be chosen to make the data model scalable.
  • The security requirements: The data design pattern should be chosen to meet the security requirements of the application.

Once you have chosen a data design pattern, you need to implement it in your data model. The implementation of the data design pattern will depend on the specific pattern that you have chosen.

Increasing Disk Space - CentOS

Sometimes you are running out of space on your system and need to increase the disk size. Both AWS and Azure provide a visual interface for this task. However, you also need to adjust your Linux filesystem to account for those changes. Here is how you do it on a CentOS 7 system.

Step 1. Expand the modified partition using growpart

First, install the cloud-utils-growpart script:
yum install cloud-utils-growpart
Next, use it to grow the partition so that it extends to all the available space:
growpart /dev/sda 1

Step 2. Resize filesystem

Both AWS and Azure use XFS for the filesystem. You might have already tried:
resize2fs /dev/sda1
and received this dreaded message, because resize2fs does not work on XFS:
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/sda1
Couldn't find valid filesystem superblock.
You should do something different instead. First mount the root partition once again at /mnt:
mount /dev/sda1 /mnt
Now you can complete the resizing with:
xfs_growfs -d /mnt

ISE Module Browser requires NuGet-anycpu.exe, but fails to install

Setting up PowerShell for the Azure portal is always a challenge.

Make sure the following path exists:
C:\Program Files\WindowsPowerShell\Modules\ISEModuleBrowserAddon\1.0.1.0\

If not, download the PowerShell module through the ISE.


To enable Module Browser for the 64-bit version of PowerShell ISE:
  1. In Windows Explorer, open
%userprofile%\Documents\WindowsPowerShell
  2. Open the profile .ps1 file for editing, then add the following lines after the comment header:
If ($env:PSModulePath.Split(';') -contains "C:\Program Files\WindowsPowerShell\Modules" -and ([Environment]::Is64BitProcess)) {
    Add-Type -Path 'C:\Program Files\WindowsPowerShell\Modules\ISEModuleBrowserAddon\1.0.1.0\ISEModuleBrowserAddon.dll'
    Write-Host 'Loaded 64-bit version'
    }
else {
    Add-Type -Path 'C:\Program Files (x86)\Microsoft Module Browser\ModuleBrowser.dll'
    Write-Host 'Loaded 32-bit version'
    }
  3. Save the changes and start the 64-bit PowerShell ISE.
Whole profile script:
#Module Browser Begin
#Version: 1.0.0
If ($env:PSModulePath.Split(';') -contains "C:\Program Files\WindowsPowerShell\Modules" -and ([Environment]::Is64BitProcess)) {
    Add-Type -Path 'C:\Program Files\WindowsPowerShell\Modules\ISEModuleBrowserAddon\1.0.1.0\ISEModuleBrowserAddon.dll'
    Write-Host 'Loaded 64-bit version of Module Browser'
    }
else {
    Add-Type -Path 'C:\Program Files (x86)\Microsoft Module Browser\ModuleBrowser.dll'
    Write-Host 'Loaded 32-bit version of Module Browser'
    }
$moduleBrowser = $psISE.CurrentPowerShellTab.VerticalAddOnTools.Add('Module Browser', [ModuleBrowser.Views.MainView], $true)
$psISE.CurrentPowerShellTab.VisibleVerticalAddOnTools.SelectedAddOnTool = $moduleBrowser
#Module Browser End
So finally, it works in a 64-bit ISE.
Now there's an issue with the 32-bit version of ISE.
For some reason, it loads the 32-bit version of Module Browser in Windows PowerShell ISE (x86), but it still results in the notorious issue of Module Browser being unable to get the NuGet package.

Convert AzureRM to AZ

Microsoft has announced that the new Azure Az module is the standard going forward for connecting to Azure cloud infrastructure. Your existing scripts will still work, because aliases for the old cmdlets can be enabled. Technically that is a good short-term solution, but it does not really future-proof your scripts.
You can enable this short term solution by running:
Enable-AzureRmAlias
Please note that you cannot do this if you have code in your script that imports the old AzureRM module. That will obviously conflict with the aliases of the new Az module. In cases where you still need to use the old AzureRM in your environment, please run:
Disable-AzureRmAlias
to disable all the aliases for the cmdlets.
If you take a closer look at the repository that the Az module is based on (Azure/azure-powershell), you'll see that there is a file called Mappings.json inside the folder src/Accounts/Accounts/AzureRmAlias.
We can directly download this file like this:
$Mappings = ((Invoke-WebRequest https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Accounts/Accounts/AzureRmAlias/Mappings.json -UseBasicParsing).Content | ConvertFrom-Json)
The $Mappings variable should now contain a list of Azure-related objects. We can now iterate over each object in the root to get a list of all mappings like this:
($Mappings | Get-Member -MemberType NoteProperty) | % {
    $Mappings.$($_.Name) | % {
        ForEach ($Mapping in ($_ | Get-Member -MemberType NoteProperty)) {
            Write-Host $_.$($Mapping.Name) "=>" $Mapping.Name
        }
    }
}
This will output a list of mappings in a readable format. We can use this to create a script that replaces the old cmdlets with new ones. The final script looks like this:
$ScriptFile = "C:\Users\Bart\Desktop\script.ps1"
$Script = (Get-Content $ScriptFile -Raw)

($Mappings | Get-Member -MemberType NoteProperty) | % {
    $Mappings.$($_.Name) | % {
        ForEach ($Mapping in ($_ | Get-Member -MemberType NoteProperty)) {
            $Script = $Script -replace $_.$($Mapping.Name),$Mapping.Name
        }
    }
}

$Script | Set-Content $ScriptFile
This should work, but note that it only replaces the cmdlet names in your script. Just to be sure, run your scripts to confirm that they still work as they used to. Do you have suggestions for other readers or me? Feel free to leave a comment; all knowledge is welcome.