Jack Yasgar has been developing software for various industries for two decades. Currently, he uses C#, jQuery, JavaScript, and SQL Server with stored procedures and/or Entity Framework to produce responsive MVC web sites that communicate with a service layer of RESTful APIs in Web API 2.0 or Microsoft WCF web services. The infrastructure can be internal, shared, or reside in Azure.
Jack has designed dozens of relational databases that use proper primary and foreign keys to ensure data integrity moving forward.
While working in a Scrum/Agile environment, he is a firm believer that quality software comes from quality planning. Without getting caught up in analysis paralysis, it is still possible to achieve a level of design that allows an agile team to move forward quickly while keeping re-work to a minimum.
Jack believes, “The key to long-term software success is adhering to the SOLID design principles. Software written quickly, using wizards and other shortcuts, can impress the business sponsor / product owner for a short period of time. Once the honeymoon is over, the product owner will stay enamored only if the team can implement changes quickly and fix bugs in minutes, not hours or days.”
Jack has become certified by the Object Management Group as OCUP II (OMG Certified UML Professional) in addition to his certification as a Microsoft Certified Professional. The use of the Unified Modeling Language (UML) provides a visual guide to Use Cases and Activities that can guide the product owner in designing software that meets the end user needs. The software development teams then use the same drawings to create their Unit Tests to make sure that the software meets all those needs.
The QA testing team can use the UML drawings as a guide to produce test cases. Once the software is in production, the UML drawings become a reference for business users and support staff to know what decisions are happening behind the scenes to guide their support efforts.
error MSB3644 – An error on Azure Pipelines when the .NET Framework version in your project is too old for Visual Studio 2022.
I hadn’t run a pipeline for a database project for several months since I was working on other projects. I received the failure message: C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Current\Bin\Microsoft.Common.CurrentVersion.targets(1220,5): error MSB3644: The reference assemblies for .NETFramework,Version=v4.5 were not found. To resolve this, install the Developer Pack (SDK/Targeting Pack) for this framework version or retarget your application. You can download .NET Framework Developer Packs at https://aka.ms/msbuild/developerpacks.
I was suspicious when I saw the reference to \2022\ in the path, which turned out to be the cause of the issue. Azure Pipelines had been upgraded to use Visual Studio 2022 build scenarios.
I checked my SQL project and found that it was set to .NET Framework 4.5, which is too old for Visual Studio 2022. I updated it to .NET Framework 4.7.2 and recompiled to make sure the change didn’t cause any issues.
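For reference, the retarget is a one-line change in the project file. A sketch of the relevant property, assuming the classic MSBuild element used by .sqlproj files:

<PropertyGroup>
  <!-- Was v4.5, for which Visual Studio 2022 no longer ships a targeting pack -->
  <TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
</PropertyGroup>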
I checked it in and merged. The job ran successfully.
String Extension Methods in C# .NET can make life so much easier.
String Extension Methods in C# .NET can make life so much easier. Everyday functions that used to require extended syntax with a return value are a thing of the past. Now with Microsoft Visual Studio, we can just add a using for our Extensions collection and have full use of them from our Shared Library.
What is a String Extension Method?
An extension method is simply an additional method: a way of attaching functionality to a type that is then available throughout your code without needing to instantiate another class.
There are plenty of best practices for extension methods; a great article is here on Microsoft’s site.
ToBoolean() Extension Method
Probably my most used and handiest is a simple one. There are so many times when we receive text, whether in a JSON payload or in a view, where we need to check whether it is a legitimate boolean value.
Here is a simple implementation of ToBoolean()
/// <summary>
/// Convert a string to a boolean
/// Yasgar Technology Group, Inc.
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static bool ToBoolean(this string value)
{
    if (value == null) { return false; }

    // Normalize once so "1", "Yes", and " TRUE " all pass the comparison
    string _value = value.Trim().ToLower();
    return _value == "true" || _value == "1" || _value == "yes";
}
This is a simple implementation that does a quick compare against a set of strings.
IsNumericInteger() Extension Method
Often, during a view post back, I need to determine whether a value accepted in a text box is numeric. While I usually try to validate this type of input using JavaScript, there are many ways that people can bypass that validation. I use this specific method to validate that the value is indeed an integer and not a decimal.
/// <summary>
/// Return bool whether the value in the string is a numeric integer
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static bool IsNumericInteger(this string value)
{
return long.TryParse(value, out long _tempvalue);
}
Here is a simple sample implementation that does a quick TryParse() to see if it is a pass or fail.
IsNumericDecimal() Extension Method
Often there are numeric fields that you’re receiving via a post back or a JSON or XML payload. This is a quick way to determine whether the value fits a decimal. Remember that integers will pass this test as well, so combine it with the IsNumericInteger() extension method if you want to determine whether the numeric value actually has a fractional part.
/// <summary>
/// Return bool whether the value in the string is a numeric decimal
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static bool IsNumericDecimal(this string value)
{
return decimal.TryParse(value, out decimal _tempvalue);
}
ToDateFromCCYYMMDD() Extension Method
There are often cases where dates are passed around in CCYYMMDD format, such as 20220329. This is my preferred method when I need to transfer a date as a query string parameter and don’t want the mess of a full DateTime. This extension method converts such a string to a DateTime object.
/// <summary>
/// Convert a string in CCYYMMDD format to a valid date
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static DateTime ToDateFromCCYYMMDD(this string value)
{
    if (!string.IsNullOrWhiteSpace(value))
    {
        if (value == "99999999")
        {
            return DateTime.MaxValue;
        }

        string _value = value.Trim();
        if (_value.IsNumericInteger() && _value.Length == 8)
        {
            int.TryParse(_value.Substring(0, 4), out int year);
            int.TryParse(_value.Substring(4, 2), out int month);
            int.TryParse(_value.Substring(6, 2), out int day);
            // Guard against values like 20221399 that would make
            // the DateTime constructor throw
            if (year >= 1 && month >= 1 && month <= 12 &&
                day >= 1 && day <= DateTime.DaysInMonth(year, month))
            {
                return new DateTime(year, month, day);
            }
        }
    }
    return DateTime.MinValue;
}
Notice that I check for the “99999999” string. This is a very popular marker for “no expiration” date, especially in mainframe data.
ToDateFromString() Extension Method
This is a variation on the ToDateFromCCYYMMDD() extension method. You might ask why I would have an extension method that does essentially the same thing as DateTime.TryParse(). Simple: I’ve worked with lots of data containing dates like “99999999” and “99/99/9999”, which I want to handle properly.
/// <summary>
/// Convert a string in MM/DD/CCYY format to a valid date
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static DateTime ToDateFromString(this string value)
{
    if (!string.IsNullOrWhiteSpace(value))
    {
        if ((value == "99999999") || (value == "99/99/9999"))
        {
            return DateTime.MaxValue;
        }

        DateTime.TryParse(value, out DateTime _value);
        return _value;
    }
    return DateTime.MinValue;
}
Notice it does use the standard DateTime.TryParse(), but only after it checks for funky dates. You may also want to put in checks for dates that are popular in your environment, such as the old SQL Server minimum date of “1/1/1753”.
Trim(int MaxLength) (Accepting a maximum length)
This extension method accepts an integer specifying the maximum length of the returned string. I use this method all the time, especially when stuffing data into old data tables where data needs to be truncated. To be honest, I find it hard to believe that after all this time, it’s still not an overload in the framework.
/// <summary>
/// Trim a string down to a particular size
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static string Trim(this string value, int p_MaxLength)
{
    if (value != null)
    {
        return value.Substring(0, Math.Min(p_MaxLength, value.Length));
    }
    return string.Empty;
}
Remember, you should never just randomly trim data without assessing whether it is going to cause data corruption. Important data should not be truncated with a method like this unless you’re logging the activity somewhere.
RemoveSpecialChars(bool dashOkay = false, bool hashOkay = false)
This is one of my favorite extension methods, not only because I use it so often when validating data, but because it has proven so versatile that I haven’t had to modify it much over the years. This method accepts two parameters that let you keep dashes and hash signs in the return value if you want. Both default to false if you don’t set them.
/// <summary>
/// Remove special characters from a string with option to
/// retain Dashes and Hash signs
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <param name="dashOkay"></param>
/// <param name="hashOkay"></param>
/// <returns></returns>
public static string RemoveSpecialChars(this string value,
                                        bool dashOkay = false,
                                        bool hashOkay = false)
{
    if (value == null) { return string.Empty; }

    StringBuilder sb = new StringBuilder();
    foreach (char c in value)
    {
        // Keep letters, digits, and spaces; keep dash/hash only when requested
        if ((c >= '0' && c <= '9') ||
            (c >= 'A' && c <= 'Z') ||
            (c >= 'a' && c <= 'z') ||
            c == ' ' ||
            (dashOkay && c == '-') ||
            (hashOkay && c == '#'))
        {
            sb.Append(c);
        }
    }
    return sb.ToString();
}
RemoveSpaces(bool StripInternal = false)
The standard Trim() extension method in the .NET framework will remove spaces from the beginning and end, but does it remove spaces inside the string? No, of course not. But there are times when that is needed, and I have just the method ready for it. It trims the front and back as well, so there’s no need to do an extra Trim() on it.
/// <summary>
/// Strip spaces from a string
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <param name="StripInternal">strip spaces from within the string</param>
/// <returns></returns>
public static string RemoveSpaces(this string value, bool StripInternal = false)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        return string.Empty;
    }

    if (StripInternal)
    {
        // Remove every whitespace character, including internal ones
        return new string(value.ToCharArray()
                               .Where(c => !Char.IsWhiteSpace(c))
                               .ToArray());
    }

    return value.Trim();
}
ToDecimal() Extension Method
If you need to retrieve a decimal value from a string, you can use this extension method. It will actually return a nullable decimal (decimal?). It will be null if the value could not be coerced into a decimal. This one could be used in place of the IsNumericDecimal() method if you need to retrieve the value and not simply pass it on if it validates. There is the extra step to check whether the return value is null though.
/// <summary>
/// Convert a string to a Decimal, return null if fails
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static decimal? ToDecimal(this string value)
{
if (decimal.TryParse(value, out decimal decItem))
{ return decItem; }
else
{ return null; }
}
Encrypt / Decrypt Extension Methods
These are two powerful extension methods that I group together. They come in so handy for encrypting and decrypting values on the fly. While it’s probably not the greatest plan to use this as an encryption strategy, I often use them while data is in flight. For instance, I parse a text file and save it to a staging database table for later processing. If there is Protected Health Information (PHI), or even Personally Identifiable Information (PII), I’ll use these methods to protect it from prying eyes before it winds up in its final resting place.
Both of these extension methods make use of the Rijndael classes in the framework’s System.Security.Cryptography namespace.
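The original code blocks for the pair aren’t reproduced here. Below is a minimal sketch of the shape such methods might take; the names EncryptToBase64/DecryptFromBase64 are hypothetical, Aes stands in for the older Rijndael classes, and key/IV handling is left to the caller:

using System;
using System.Security.Cryptography;
using System.Text;

public static class CryptoExtensions
{
    // Hypothetical sketch: encrypt a string to Base64 with a caller-supplied key/IV
    public static string EncryptToBase64(this string plainText, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (ICryptoTransform encryptor = aes.CreateEncryptor())
            {
                byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
                byte[] cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
                return Convert.ToBase64String(cipherBytes);
            }
        }
    }

    // Hypothetical sketch: the reverse operation
    public static string DecryptFromBase64(this string cipherText, byte[] key, byte[] iv)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (ICryptoTransform decryptor = aes.CreateDecryptor())
            {
                byte[] cipherBytes = Convert.FromBase64String(cipherText);
                byte[] plainBytes = decryptor.TransformFinalBlock(cipherBytes, 0, cipherBytes.Length);
                return Encoding.UTF8.GetString(plainBytes);
            }
        }
    }
}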
ToProperCase()
How often do we need to convert a standard text string to proper or title case? All the time! So here is a solution for your needs.
using System.Globalization;
using System.Threading;
/// <summary>
/// Convert a string to Proper case
/// based on the current thread culture
/// Yasgar Technology Group, Inc. - www.ytgi.com
/// </summary>
/// <param name="text"></param>
/// <returns></returns>
public static string ToProperCase(this string text)
{
if (text != null)
{
CultureInfo _cultureInfo = Thread.CurrentThread.CurrentCulture;
TextInfo textInfo = _cultureInfo.TextInfo;
// Send in the text as lower case to allow the method to
// make all the decisions
return textInfo.ToTitleCase(text.ToLower());
}
else
{
return text;
}
}
If you love extension methods, take a look at some of my other posts about them.
I can’t believe that I didn’t write this post two years ago when I figured this out. I apologize to everyone who had to figure it out on their own.
When I first tried to create NuGet packages from my apps and upload them, I wasn’t using Jenkins. But no matter: the concepts were the hard part, not the application that was doing the creation and uploading (push).
Scenario
You’re doing software development and have one or more components that you want to re-use in many projects, whether it’s a collection of extensions or models for an API. The goal is a versioned package that other developers can grab and easily add to their applications. I won’t bore you here with why this is a good idea, even though many corporate developers argue about what a pain it is to use NuGet packages for this.
The main concern is that you don’t want this proprietary code to be publicly available on NuGet.org. The three choices I’ve worked with are a file share, Azure Artifacts, and GitHub Packages. I’m not going to discuss Azure Artifacts here, because they’re actually a little easier: if you’re using Azure DevOps, the authentication is coordinated, whereas in GitHub it’s not so easy.
I spent many hours trying to get this to work. At one point, I actually got a NuGet package to build and upload to GitHub. Then the next day I would try it and it would fail again. I opened a ticket with GitHub and had a few email exchanges. The last email basically said, “If you figure it out, let us know how you did it.” Well, I have to admit that I never did; as much as I don’t like the saying “It’s not my job”, I’m not being paid to educate GitHub support staff.
GitHub Support: “If you figure it out, let us know how you did it.”
Anyway, enough with the small talk, let’s get down to it.
Concepts
I need to state that this solution applies to the .NET Core and .NET 5.x versions I’ve been using for a few years. If you’re trying to do this with older .NET Framework versions, some of this may not apply.
There are three major things you’ll have to be aware of when creating a NuGet package.
Your Project File
The default project file in .NET Core or .NET 5.x is not sufficient to create a NuGet package unless you hard-code all the data in the {ApplicationName}.nuspec file covered below. I recommend embellishing the csproj file instead.
There are several things that a NuGet package requires: id (PackageId), title, and so on. You need to make sure all this data is in your csproj file. I have a sample below:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>library</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <PackageId>YTG-Framework-V2</PackageId>
    <Version>2.1.1</Version>
    <Authors>Jack Yasgar</Authors>
    <Company>Yasgar Technology Group, Inc.</Company>
    <PackageDescription>Shared resources for all Yasgar Technology Group applications</PackageDescription>
    <RepositoryUrl>https://github.com/YTGI/YTG-Framework-V2</RepositoryUrl>
    <Description>Shared framework methods and objects used throughout the enterprise.</Description>
    <Copyright>@2021 - Yasgar Technology Group, Inc</Copyright>
    <AssemblyVersion>2.1.1.0</AssemblyVersion>
    <FileVersion>2.1.1.0</FileVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Logging" Version="3.1.3" />
    <PackageReference Include="Microsoft.Extensions.Logging.Configuration" Version="3.1.3" />
  </ItemGroup>
</Project>
Almost every line in this file is important. The MOST important one is the “Version” key. The first time you get your upload (nuget push) working, it will go fine, but it will get an exception the second time if you don’t increment the version. The NuGet upload process ignores “AssemblyVersion” and “FileVersion”.
{ApplicationName}.nuspec
This is the file that the NuGet packager actually looks at to create a NuGet package. The file has a variable-based syntax that pulls data from the csproj file, hence my recommendation to use the csproj file as the source of truth. You have the option of hard-coding values here if you wish. Why use the variables, you ask? Because if the csproj file is the source of truth, your {ApplicationName}.nuspec can have the same content in every project you have. That makes this process simpler if you plan to have several NuGet packages.
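The live file from the original post isn’t reproduced here; this is a representative sketch using the standard nuspec replacement tokens that pull from the project file (the <tags> value is illustrative):

<?xml version="1.0" encoding="utf-8"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <description>$description$</description>
    <copyright>$copyright$</copyright>
    <tags>YTG Framework Shared</tags>
  </metadata>
</package>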
As you can see, the only thing you might want to adjust in a file like the one above is the <tags> entry. All the rest pulls from the project file as variables.
Now for the next tricky part. Here’s where your existing knowledge may hurt you. If you said to yourself that you don’t need a NuGet.config file in your project, you’re right, if you just want to fetch packages. But if you’re creating packages on Azure Pipelines or GitHub Actions, you’ll need it. If you’re creating your package on Jenkins, you can just have a reusable NuGet.config in the C:\Users\{serviceid}\AppData\Roaming\NuGet folder, where {serviceid} is the service account that Jenkins is running as, often LocalSystem, which means {serviceid} = “Default”.
Okay, so now our project is ready for a NuGet package build. So how do we get it pushed up? Well, that’s different depending on your environment. I’ll show you a few ways.
Command Line
First thing is to make sure your NuGet package gets created. You can do that in the CLI (Command Prompt):
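A sketch using the .NET CLI; the output folder name matches what the next paragraph expects:

dotnet pack -c Release -o nupkgs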
This should build your project and put the NuGet package in a folder called “nupkgs”. If this doesn’t work, you need to review all the messages and the configuration steps above. Remember, this process is not concerned that you’re in a hurry.
If you wind up with a .nupkg file in the folder, pat yourself on the back: you’re most of the way there. Here’s where I ended up opening a ticket with GitHub, because I had a nupkg but couldn’t get it to upload consistently. It turned out that I, and GitHub support, didn’t understand the different credentials.
I and GitHub support didn’t understand the different credentials.
In your NuGet.config, there are two different sections.
The “packageSourceCredentials” section is exactly what it says: the credentials to READ the NuGet packages. That’s where I got sidetracked; it has nothing to do with uploading the package after it’s created.
In order to actually PUSH (upload) a file, you need to have “apikeys” credentials. Pushing DOES NOT use your regular GitHub access from “packageSourceCredentials”. That means you have to have another section in your nuget.config file that gives you access to push files. This is the ef’d up thing: it is often the exact same credentials, just in a different place.
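Here is a sketch of how the two sections sit side by side in one nuget.config; the source name, organization, account, and token values are all illustrative:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="github" value="https://nuget.pkg.github.com/YTGI/index.json" />
  </packageSources>
  <!-- Credentials used to READ (restore) packages -->
  <packageSourceCredentials>
    <github>
      <add key="Username" value="ytgi-devops" />
      <add key="ClearTextPassword" value="ghp_xxxxxxxxxxxx" />
    </github>
  </packageSourceCredentials>
  <!-- Credentials used to PUSH packages; written by 'nuget setapikey' (shown below) -->
  <apikeys>
    <add key="https://nuget.pkg.github.com/YTGI/index.json" value="...encrypted value..." />
  </apikeys>
</configuration>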
RECOMMENDATION: You should create a separate, generic account for API access. This mitigates the problem of a developer leaving the team while holding all the push tokens under their personal account.
GitHub Token
In order to get the proper credentials to use for pushing (uploading) the NuGet package, log in to GitHub as your generic (DevOps) account if you have one. If not, use your current account. MAKE SURE YOU GO INTO “Settings” FROM YOUR ACCOUNT ICON IN THE UPPER RIGHT CORNER and not the “Settings” on the account or project menu.
Once you’re in there, click “Personal access tokens” and click “Generate new token”.
Give your new token a name and select the expiration; I suggest 90 days, but “No expiration” is your decision.
Select the following options:
workflow
write:packages
read:packages
delete:packages
admin:org
write:org
read:org
user:email
Make sure you copy and SAVE the token, because you will never be able to see it again!
You can encode the token that needs access to push the NuGet package with the following command. Please note that if you don’t include the -ConfigFile entry, it will update the NuGet.config in the C:\Users folder and NOT the one in your project. This causes the issue where it works when you do it, but all downstream deployments fail, i.e.: Jenkins, Azure, etc.
The command to add the API key to your project’s nuget.config file is:
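A sketch, with an illustrative token and the source name “github” assumed to match your config:

nuget setapikey ghp_xxxxxxxxxxxx -Source "github" -ConfigFile .\nuget.config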
I’m going to repeat that you must notice the -ConfigFile param. If you don’t add it, the command will update the nuget.config in your current user’s \Roaming\NuGet folder, which is not what you want if this project is being deployed from a different device, i.e. Jenkins or Azure.
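The push itself looks something like this; the package path and source name are illustrative:

nuget push .\nupkgs\YTG-Framework-V2.2.1.1.nupkg -Source "github" -SkipDuplicate -ConfigFile .\nuget.config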
Notice the -SkipDuplicate argument in the above CLI command. It will cause the push (upload) command to ignore the fact that you’re trying to upload a duplicate version instead of raising an error that will fail a build. Just keep in mind that if you forget to change your version in the csproj file, your change will not show up. It is against convention to make changes to an existing version, as that could cause real problems for consuming applications. If you made a mistake, or left something out and you want to get rid of the version you just built, you’ll need to go into GitHub packages, Versions and delete the version you just built to upload it again with the same version number.
If this command works, then you have setup your nuget.config file correctly and you’re good to go.
NOTE: It will sometimes take up to a minute or so for the package to show up in the GitHub Packages list, so be patient and don’t assume it didn’t work until you give it some time.
There are many times that I’ve wanted to quickly update the property values in a collection without needing to write a foreach loop. Sometimes it’s because I needed to do it within a larger query; other times, just because it’s a relatively simple update and I like being able to do it in one line of code.
Take for instance this example: I have a list of objects and I want to add a counter value to each. I’m doing this because the collection is sorted, but later processing is threaded, so the items come out of that method unsorted. I wanted a way to quickly get them sorted again so I didn’t have to pass around the sortColumn and sortOrder properties.
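My original snippet isn’t reproduced here, but a minimal sketch of the idea, using List<T>.ForEach and a hypothetical SortOrder property, looks like this:

// results is a List<T> whose items expose a hypothetical SortOrder property
int counter = 0;
results.ForEach(r => r.SortOrder = counter++);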
You can easily call a method from within the lambda as well; just keep in mind that this runs synchronously. If the method is simple, we could rewrite the above like:
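A sketch, assuming a hypothetical GetNextSortOrder() helper method:

// The delegate (and any method it calls) runs synchronously per element
results.ForEach(r => r.SortOrder = GetNextSortOrder());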
Warning NU1701 Package ‘Microsoft.AspNet.Mvc 5.2.7’ was restored using ‘.NETFramework,Version=v4.6.1, .NETFramework,Version=v4.6.2, .NETFramework,Version=v4.7, .NETFramework,Version=v4.7.1, .NETFramework,Version=v4.7.2, .NETFramework,Version=v4.8’ instead of the project target framework ‘net5.0’. This package may not be fully compatible with your project.
When I created an MVC project in Visual Studio, it automatically put a reference to Microsoft.AspNet.Mvc. Since I’m now using ASP.NET 5, I started to see several of these warnings stacked in my Warning list.
I right-clicked on my solution and selected “Manage NuGet Packages for Solution…”
On the left, I highlighted “Microsoft.AspNet.Mvc” and made sure the project(s) that reference it had a check mark next to them on the right.
Clicked “Uninstall”.
Searched NuGet for “Microsoft.AspNetCore.Mvc” and installed it to the same projects from which I had just removed the other.
I had to remove “using System.Web.Mvc” references. This broke some “AllowHtml” decoration attributes, which I removed, as they are no longer needed in .NET Core.
I also had to edit the project file for the main web project and remove the following line:
There are times when you need to get records in one table with a foreign key to a related one to many table. This is a difficult need to describe, so I’ll give you the exact business scenario.
I have designed and used a Process Tracking system for many years. It currently has two basic components in the database:
A FileProcess table that tracks a file (name, date, paths, app that processed it, etc.)
A StatusLog table into which I punch records as the file goes through the process of being imported, validated, etc.
Often, I have multiple applications that process a batch of records from a file. I designed a stored procedure that would allow me to check for any file, by a particular application, that was in a particular status, but not past that status.
So here’s the scenario, we have a process that I have assigned the following status log values:
10 – File Parsed
20 – File Imported
30 – Data Validated
40 – Data Archived
Ok, so one application parses the file and imports it; let’s say it’s a SQL SSIS package just for fun. It punches two status records in while it’s working: a 10 and a 20.
So now I have another validation application that checks every few minutes for something to do. I want it to be able to find any file that is in a status of 20, but NOT higher than that. So then I know it’s ready to be validated.
In order to do this, I have the following LINQ to SQL query that seems to do the job for me. I hope looking at this code will help you with whatever similar type of issue you’re trying to solve:
public async Task<List<FileProcess>> GetFileProcessesForAStatusByAppIdAsync(int AppId, int StatusId)
{
    var _entityrows = (from st in _appLogContext.StatusLogs
                       join fp in _appLogContext.FileProcess.Include(a => a.App) on st.FileProcessId equals fp.Id
                       where st.AppId == AppId
                          && st.StatusId == StatusId
                          // Only match when this is the file's highest (latest) status
                          && st.StatusId == (_appLogContext.StatusLogs
                                 .Where(f => f.FileProcessId == fp.Id)
                                 .OrderByDescending(p => p.StatusId)
                                 .FirstOrDefault().StatusId)
                       select fp).AsNoTracking();

    return await _entityrows.ToListAsync();
}
For those of you who are database jockeys, here’s the SQL code that this replaces:
-- Procedure name assumed to match the repository method; the original header was not shown
CREATE PROCEDURE [dbo].[GetFileProcessesForAStatusByAppId]
    @AppId AS INT = NULL,
    @StatusId AS INT = NULL
AS
BEGIN
    SELECT
        [Id],
        [AppId],
        [FileName],
        [DateProcessed],
        [Inbound]
    FROM
        [FileProcess]
    WHERE
        Id IN (
            SELECT
                s.FileProcessId
            FROM
                (SELECT DISTINCT
                        MAX(StatusId) OVER (PARTITION BY FileProcessId) AS ProperRow,
                        FileProcessId,
                        AppId
                 FROM StatusLogs) AS s
            WHERE
                s.ProperRow = @StatusId
                AND AppId = @AppId
        )
END
When using Entity Framework (EF) Core, by default EF Core will track any records that it pulls from the database so that it can tell whether they have changes when you go to save them again. If you attempt to add the same record again, it will complain with a “The instance of entity type cannot be tracked because another instance with the same key value for {‘Id’} is already being tracked” error.
If you do N-tier development, having EF track your objects in the Repository or DataLayer of your API is of no use. It will start to cause problems when you go to save the object through a different endpoint that has created a copy of the repository model and a SaveChanges() is attempted.
In order to work around this, you can configure the dependency-injected (DI) instance of your DB context not to use query tracking, with this type of code in your Startup.cs:
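A minimal sketch of that registration; the context type comes from the logging example later in this post, and the connection string name is illustrative:

// In Startup.ConfigureServices: all queries through this context
// default to no tracking unless a query opts back in
services.AddDbContext<AppLogContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("AppLogDb"))
           .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));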
Using Dependency Injection can have challenges, along with rewards.
System.InvalidOperationException: Cannot consume scoped service
Copyright 2020 Microsoft 🙂
This error occurred when I modified my AppLogging REST service to have an internal service that logs errors directly to the database. Can’t have the AppLogging service call itself if there’s an error, right?
After the modification, I received the following error:
System.InvalidOperationException: Cannot consume scoped service 'Enterprise.Logging.Repository.Context.AppLogContext' from singleton 'WF.Library.Shared.Logging.IAppLocalLoggingSvc`1[Enterprise.Logging.App.Rest.Controllers.AppMastersController]'.
After some head tapping, I realized that I had modified the internal service class to now accept the DBContext, so that I could log errors directly to the database.
public AppLoggingSvc(AppLogContext appLogContext, IOptionsMonitor<WFAppSettings> appSettings)
I had the Dependency Injection (DI) setup like:
// Add DI reference to AppLoggingSvc that is a generic type
services.AddSingleton(typeof(IAppLocalLoggingSvc<>), typeof(Services.AppLoggingSvc<>));
I found that the problem was this: when you use AddDbContext to add the database context to your Dependency Injection collection, it is added as “Scoped”. I was adding my IAppLocalLoggingSvc as a Singleton, but it was accepting a Scoped DI component in its constructor. These two lifetimes are incompatible.
I found that using AddTransient resolved the issue:
// Add DI reference to AppLoggingSvc that is a generic type
services.AddTransient(typeof(IAppLocalLoggingSvc<>), typeof(Services.AppLoggingSvc<>));
Thanks for reading! Happy Coding.
Full Error Listing:
System.InvalidOperationException: Cannot consume scoped service 'Enterprise.Logging.Repository.Context.AppLogContext' from singleton 'WF.Library.Shared.Logging.IAppLocalLoggingSvc`1[Enterprise.Logging.App.Rest.Controllers.AppMastersController]'.
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitScopeCache(ServiceCallSite scopedCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitConstructor(ConstructorCallSite constructorCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSiteMain(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitRootCache(ServiceCallSite singletonCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.ValidateCallSite(ServiceCallSite callSite)
at Microsoft.Extensions.DependencyInjection.ServiceProvider.Microsoft.Extensions.DependencyInjection.ServiceLookup.IServiceProviderEngineCallback.OnCreate(ServiceCallSite callSite)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.CreateServiceAccessor(Type serviceType)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ActivatorUtilities.GetService(IServiceProvider sp, Type type, Type requiredBy, Boolean isDefaultParameterRequired)
at lambda_method(Closure , IServiceProvider , Object[] )
at Microsoft.AspNetCore.Mvc.Controllers.ControllerActivatorProvider.<>c__DisplayClass4_0.<CreateActivator>b__0(ControllerContext controllerContext)
at Microsoft.AspNetCore.Mvc.Controllers.ControllerFactoryProvider.<>c__DisplayClass5_0.<CreateControllerFactory>g__CreateController|0(ControllerContext controllerContext)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
A few months ago, I was enabling paging on a .NET Core 3.1 MVC application and had my search model passed into a controller method via AJAX. Well, it didn’t work. I received a NULL DTO object no matter what I tried. Trying to figure out what to do about an MVC Ajax JSON Null DTO in a controller method had me chasing my tail.
Fast forward to a few days ago, and guess what, another web app, same use case, same issue. Problem was, I couldn’t remember how I resolved it. Well, after another two hours of tinkering around with different objects, removing default settings in my DTO, and more endless googling, I finally found the issue… again.
The main issue I had is that System.Text.Json is not really usable for this out of the box. Unless all your properties are strings, you have to set up custom converters for each type. That about sums it up: unless you’re passing in a very simple object that only has string properties, you can pretty much forget about using this library as-is.
For those of you in a hurry, here is a summary of what I did. Details of the implementation will follow:
1. Make sure you have “FromBody” in your controller method. I already had this, but it’s what most blog posts focus on.
[HttpPost]
public async Task<IActionResult> CatDisplay([FromBody] SearchModel<LuCategory> searchModelIn)
2. Change the default JSON serializer in your Startup.cs
using Microsoft.AspNetCore.Mvc;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews().AddNewtonsoftJson();
}
If you get the little squigglies under this method name, then add the Nuget package: Microsoft.AspNetCore.Mvc.NewtonsoftJson
Just so you can see how I’m calling this, here is the JavaScript/jQuery/JSON that I’m sending in:
function GetPaging(ToPage) {
var _url = "/@Model.controllerName/@Model.actionName";
// Set the global values for sorting post back
var searchModel = {};
searchModel.SortColumn = '@Model.SortColumn';
searchModel.PrevSortColumn = ''; // Leave blank so sorting doesn't kick;
searchModel.CurrentPage = ToPage;
searchModel.PageSize = @Model.PageSize;
searchModel.SearchTerm = '@Model.SearchTerm';
searchModel.SearchFilter = '@Model.SearchFilter';
searchModel.SortDescending = '@Model.SortDescending';
searchModel.ActiveOnly = '@Model.ActiveOnly';
searchModel.RefId = @Model.RefId;
searchModel.RefUniqueId = '@Model.RefUniqueId';
$.ajax({
type: "POST",
url: _url,
async: true,
contentType: "application/json",
data: JSON.stringify(searchModel),
dataType: "html",
success: function (result, status, xhr) {
$("#gridPartial").html(result)
},
error: function (xhr, status, error) {
alert("Result: " + status + " " + error + " " + xhr.status + " " + xhr.statusText)
}
});
}
3. The last problem I ran into was boolean values. In the above example, the boolean value was coming from the Model, so there was no issue. However, if you are trying to pass a boolean value from JavaScript or jQuery: big problems. In order to be sure that what is being passed as a value can be deserialized into the object, you should have code like:
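A sketch using the same property names as the example above; the comparison converts the Razor-rendered string (“True”/“False”) into a real JSON boolean:

// '@Model.SortDescending' renders as the string "True" or "False";
// compare it so JSON.stringify emits a true boolean value
searchModel.SortDescending = ('@Model.SortDescending'.toLowerCase() === 'true');
searchModel.ActiveOnly = ('@Model.ActiveOnly'.toLowerCase() === 'true');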
Any senior developer is going to want to create a library of commonly used features that can be shared among MuleSoft applications. In Mule, we create a common, shared project that has reusable components and flows. Domain projects don’t allow reusable flows, so this is another great option.
Updated April 12, 2024
This can include many things, the most popular being:
Global Error Handling
Global Logging
Common Business Logic Flows
This activity was completed using the following versions:
Mule 4.2.2
AnyPoint Studio 7.4.2
Mule Maven Plugin 3.3.5
I will show as much of the setup in the AnyPoint Studio UI as possible. We’ll only edit the files manually when necessary.
First, create a Mule project that you want to have your shared code in. I called my project “mule-common-flows”. If you commonly use a domain application, DO NOT use it for this project. This project will not run on its own; it will only be referenced by other projects that, in turn, can be domain-based projects.
I added two Mule configuration files. One that will have Error Handlers, and another that will have a re-usable flow.
Once you’re happy with your re-usable components, we need to edit the POM.XML file. We need to add a “classifier” tag in the “org.mule.tools.maven” plugin section, like so:
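A sketch of that plugin section; the version matches the list above, and the mule-plugin classifier is what makes the resulting JAR consumable as a dependency:

<plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <version>3.3.5</version>
    <extensions>true</extensions>
    <configuration>
        <classifier>mule-plugin</classifier>
    </configuration>
</plugin>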
If you receive the message “The packaging for this project did not assign a file to the build artifact”, you may be using mule-maven-plugin version 3.8.x or later; you’ll need to remove the “package” option, like so:
C:\Source\mule-common-flows>mvn clean install
You should receive a “BUILD SUCCESS” message. If you don’t, you’ll have to resolve whatever issues you have before continuing. This will build a JAR file and place it in the /target folder in the project.
Leave this project open in your Package Explorer, although you can close all the XML file tabs.
In the project in which you wish to reference the shared items, we have to manually edit the POM.XML file again. Add a section in the dependencies section like below:
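A sketch; the groupId and version are illustrative and must match the mule-common-flows project’s own POM:

<dependency>
    <groupId>com.mycompany</groupId>
    <artifactId>mule-common-flows</artifactId>
    <version>1.0.0</version>
    <classifier>mule-plugin</classifier>
</dependency>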
The dependency property values need to match the values in the POM.XML of the mule-common-flows project. The “classifier” property is additional.
Next, open one of the Mule Configuration Files in your project that is going to consume the mule-common-flows.
Click on the “Global Elements” tab.
Click on the “Create” button.
Navigate to Global Configurations – Import and click “OK”
DO NOT click on the ellipses to select a file. The file is already referenced in the JAR file of the mule-common-flows project. So you need to just manually enter the name of the configuration file. I’m going to add two, one for my global error handler and the other for the shared flows. Better to copy and paste the names to be sure. I put the names of the files in the Notes section too, so that they show up in the summary screen.
Your Global Configuration Elements should look something like the below. You can see how entering the names of the files in the notes is helpful.
Save all your changes.
Now to test them out. I’m going to set up my project to reference my new and shiny Global Error Handler. Click on “Create” again. Expand “Global Configuration”, click on “Configuration”, and click “OK”.
Your Global Error Handler should now be in the drop down. If it’s not, then make sure you’ve saved everything. You can try closing and re-opening the project as we often need to do.
Click “OK” to save those changes.
Now, let’s go to the “Message Flow” tab. Drag a “Flow Reference” to the canvas if one doesn’t already exist.
Click on your “Flow name” drop down and you should now see our flow from the mule-common-flows project like highlighted below:
There you did it! Now go show off to all your colleagues.