Detecting which button was clicked to cause a post back is very easy, once you know how to do it!
I often have screens that have multiple submit buttons on a single form that require a post back. I needed to pass the button value to the controller in MVC. For instance, I may have a data entry screen where a user can click a button that says “Save” or “Save as New”. I need to be able to detect which button they clicked when the page posts back.
Fortunately, there is an easy way to determine which button the user selected when the page posts back on a submit action.
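To make this concrete, here's a rough sketch of the markup involved. It uses the ASP.NET Core form tag helper, and the names and values line up with the controller example further down; your button text and values will differ:

<form asp-action="SearchBegin" method="post">
    <!-- Both buttons submit the same form; the value identifies which button was clicked -->
    <button type="submit" name="submitButton" value="TopTen">Top Ten</button>
    <button type="submit" name="submitButton" value="Trad">Traditional Search</button>
</form>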
The buttons must be of type=submit; a type=button won’t post back. You have a choice here: use the value attribute or leave it off. If you don’t declare a value attribute, then what you’ll receive in the controller is the text of the button. While this works, you or another developer may change that text in the future and not realize it will break your code. I recommend using the value attribute, like I’ve used above, since it’s far less likely to change in the future.
The next most important part is the name attribute. Every button that should trigger the post back must have the same name. This will also be the name of the parameter in your controller, and the two must match. The value you declare on the button will be the argument passed to your controller method.
public async Task<ActionResult> SearchBegin([FromForm] SearchPageModel _searchModelIn, [FromForm] string submitButton)
{
    // If you "name" your buttons as "submitButton"
    // then you can grab the value of the button
    // here to make decisions on which button was clicked
    switch (submitButton)
    {
        case "TopTen":
            return TopTen(_searchModelIn);
        case "Trad":
            return Traditional(_searchModelIn);
        default:
            break;
    }

    return Redirect("~/"); // Go home
}
The parameter name in your method must match the name attribute on your buttons exactly. The type passed in will be a string, although I imagine that if the value attribute on all of your buttons were numeric, you could declare the parameter as an int.
Once you’re in your method, you can use logic in a switch statement to detect the value passed in and make a decision how to proceed.
Using the Task async/await pattern for grabbing data can be a real performance enhancement. When you thread off the calls, it’s pretty normal to want the Task results combined into one single usable collection. The example I can give is a method that needs to gather up several different categories of lookup items. These calls all return a collection of the same type.
When you await the tasks, you generally have a few options:
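The first option (a rough sketch here, using hypothetical lookup calls that each return a List<LuItem>) is to kick off the tasks and then await and append each result by hand:

// Start the calls, then await and merge each result one at a time
Task<List<LuItem>> colorsTask = GetColorsAsync();
Task<List<LuItem>> sizesTask = GetSizesAsync();
Task<List<LuItem>> statusesTask = GetStatusesAsync();

var _return = new List<LuItem>();
_return.AddRange(await colorsTask);
_return.AddRange(await sizesTask);
_return.AddRange(await statusesTask);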
Not sure how you feel, but this is horrible. I’m sure I’ve done something like this in the past, but I’d prefer not to think about it.
Use WhenAll to retrieve them in an Array
Task.WhenAll, when used with typed tasks, returns an array of the result type. So in this case it returns an array of List<LuItem>. We can then use a simple LINQ call to push them all into one collection.
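Here's a sketch of that approach, again with hypothetical lookup calls:

// Kick off the lookup calls so they run concurrently (hypothetical methods)
Task<List<LuItem>>[] tasks =
{
    GetColorsAsync(),
    GetSizesAsync(),
    GetStatusesAsync()
};

// The typed WhenAll gives us a List<LuItem>[] once every task has completed
List<LuItem>[] results = await Task.WhenAll(tasks);

// Flatten the array of lists into the single collection we actually want
var _return = new List<LuItem>();
results.ToList().ForEach(r => _return.AddRange(r));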
In this example, we await the tasks with WhenAll, which has a return value, as opposed to WaitAll, which does not. As stated earlier, this example returns the collection as Task<List<LuItem>[]>, so we’re most of the way there. We then use a ToList().ForEach LINQ call to transform the array of lists into a single list called _return.
Summing a collection that is within a collection without using nested foreach loops can be easily done with LINQ
It’s hard to think of a good name for this post. But if you have a collection and each item has a collection of values that you need to get a sum on, you can do that easily with LINQ.
Say you have a List<CartItem> in a shopping cart. Each item has a list of DecimalCost values; perhaps the user has ordered different sizes or colors, each with an associated cost.
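A minimal sketch, assuming a cart variable of type List<CartItem> and the hypothetical class shape below:

// Hypothetical shape of the cart item described above
public class CartItem
{
    public string ProductName { get; set; }
    public List<decimal> DecimalCost { get; set; } = new List<decimal>();
}

// Flatten every item's costs and sum them -- no nested foreach loops
decimal cartTotal = cart.SelectMany(item => item.DecimalCost).Sum();

// Equivalent form: sum each item's costs, then sum those sums
decimal cartTotal2 = cart.Sum(item => item.DecimalCost.Sum());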
I’m sure we’ve all experienced the great idea of looping through a collection and trying to remove an item that doesn’t need to be there. You’ll get the infamous “Collection was modified; enumeration operation may not execute” exception. You can create a new collection and add only the items you want to keep, but that’s extra overhead.
Collection was modified; enumeration operation may not execute
List<T>.RemoveAll is a method you can use in place of a foreach loop:
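A minimal sketch, using the entity, TheBody, and Elements names described below and an assumed predicate (remove every element with an empty Value):

// Removes, in place, every element that matches the lambda -- no foreach required
entity.TheBody.Elements.RemoveAll(e => string.IsNullOrWhiteSpace(e.Value));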
In this example, I have an email named “entity” with a “TheBody” property that has a collection of Elements. The Elements have two properties, “Key” and “Value”, basically like a Dictionary entry. Creating a new list of elements means a new List<EmailElement> and then a .Clear and .AddRange, which kills more CPU cycles and milliseconds.
However, executing the above line will remove all the items from the collection that meet the criteria in the lambda.
There are many times that I wanted to be able to quickly update the property values in a collection without needing to create a foreach loop. Sometimes it’s because I needed to do it within a larger query; other times, just because it’s a relatively simple update and I like being able to do it in one line of code.
Take for instance this example. I have a list of objects and I want to add a counter value to each. I’m doing this because the collection is sorted, but later processing is threaded, so the items come out of that method unsorted again. I wanted a way to quickly get them sorted again so I didn’t have to pass around the sortColumn and sortOrder properties.
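A sketch of that one-liner; the _rows list and the SortIndex property are hypothetical names for this illustration:

int _counter = 0;

// Stamp each item with its position in the currently-sorted list so the
// original order can be restored after the threaded processing scrambles it
_rows.ForEach(r => r.SortIndex = _counter++);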
You can easily call a method from within your code as well, but just keep in mind that this runs synchronously. If the method is simple, we could rewrite the above like:
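One way that rewrite might look, with a hypothetical SetSortIndex helper doing the per-item work (remember, ForEach runs it synchronously for each item):

// Same one-liner, but handing each item off to a small helper method
_rows.ForEach(r => SetSortIndex(r));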
There are times when you need to get records in one table based on a foreign key relationship to a related one-to-many table. This is a difficult need to describe, so I’ll give you the exact business scenario.
I have designed and used a Process Tracking system for many years. It currently has two basic components in the database:
A FileProcess table that tracks a file (name, date, paths, app that processed it, etc.)
A StatusLog table where I punch in records as the file goes through the process of being imported, validated, etc.
Often, I have multiple applications that process a batch of records from a file. I designed a stored procedure that would allow me to check for any file, by a particular application, that was in a particular status, but not past that status.
So here’s the scenario, we have a process that I have assigned the following status log values:
10 – File Parsed
20 – File Imported
30 – Data Validated
40 – Data Archived
Ok, so one application parses the file and imports it, let’s say it’s an SQL SSIS package just for fun. So it punches two status records in while it’s working, a 10 and a 20.
So now I have another validation application that checks every few minutes for something to do. I want it to be able to find any file that is in a status of 20, but NOT higher than that. So then I know it’s ready to be validated.
In order to do this, I have the following LINQ to SQL query that seems to do the job for me. I hope looking at this code will help you with whatever similar type of issue you’re trying to solve:
public async Task<List<FileProcess>> GetFileProcessesForAStatusByAppIdAsync(int AppId, int StatusId)
{
    try
    {
        var _entityrows = (from st in _appLogContext.StatusLogs
                           join fp in _appLogContext.FileProcess.Include(a => a.App) on st.FileProcessId equals fp.Id
                           where st.AppId == AppId
                              && st.StatusId == StatusId
                              && st.StatusId == (_appLogContext.StatusLogs
                                                     .Where(f => f.FileProcessId == fp.Id)
                                                     .OrderByDescending(p => p.StatusId)
                                                     .FirstOrDefault().StatusId)
                           select fp).AsNoTracking();

        return await _entityrows.ToListAsync();
    }
    catch (Exception)
    {
        throw;
    }
}
For those of you that are database jockeys, here’s the SQL code that this replaces:
@AppId AS INT = NULL,
@StatusId AS INT = NULL
SELECT
[Id],
[AppId],
[FileName],
[DateProcessed],
[Inbound]
FROM
[FileProcess]
WHERE
Id IN (
SELECT
s.FileProcessId
FROM
(SELECT DISTINCT MAX(StatusId)
OVER(PARTITION BY FileProcessId)
AS ProperRow, FileProcessId, AppId
FROM StatusLogs) AS s
WHERE
s.ProperRow = @StatusId
AND AppId = @AppId
)
When using Entity Framework (EF) Core, by default, EF Core will track any records that it pulls from the database so that it can tell if it has changes when you go to save it again. If you attempt to add the same record again etc, it will complain with a “The instance of entity type cannot be tracked because another instance with the same key value for {‘Id’} is already being tracked” error.
If you do N-Tier development, then having EF track your objects in the Repository or DataLayer of your API is of no use. It will start to cause problems when you go to save the object through a different endpoint that has created a copy of the repository model and a SaveChanges() is attempted.
In order to work around this, you can declare the Dependency Injected (DI) instance of your DB context to not use Query Tracking by using this type of code in your Startup.cs.
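A minimal sketch of that registration in ConfigureServices, assuming the AppLogContext from this post's examples and a made-up connection string name:

// Startup.cs -- requires using Microsoft.EntityFrameworkCore;
// Register the context with change tracking turned off by default
services.AddDbContext<AppLogContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("AppLogDb"))
           .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));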
Using Dependency Injection can have challenges, along with rewards.
System.InvalidOperationException: Cannot consume scoped service
Copyright 2020 Microsoft 🙂
This error occurred when I modified my AppLogging REST Service to have an internal service that logged errors directly to the database. Can’t have the AppLogging Service call itself if there’s an error, right?
After the modification, I received the following error:
System.InvalidOperationException: Cannot consume scoped service 'Enterprise.Logging.Repository.Context.AppLogContext' from singleton 'WF.Library.Shared.Logging.IAppLocalLoggingSvc`1[Enterprise.Logging.App.Rest.Controllers.AppMastersController]'.
After some head tapping, I realized that I had modified the internal service class to now accept the DBContext, so that I could log errors directly to the database.
public AppLoggingSvc(AppLogContext appLogContext, IOptionsMonitor<WFAppSettings> appSettings)
I had the Dependency Injection (DI) setup like:
// Add DI reference to AppLoggingSvc that is a generic type
services.AddSingleton(typeof(IAppLocalLoggingSvc<>), typeof(Services.AppLoggingSvc<>));
I found the problem was that when you use AddDbContext to add the database context to your Dependency Injection collection, it is added as “Scoped”. So I was adding my IAppLocalLoggingSvc as a Singleton, but it was accepting a DI component in its constructor that was Scoped. These two scenarios are incompatible.
I found that using AddTransient resolved the issue:
// Add DI reference to AppLoggingSvc that is a generic type
services.AddTransient(typeof(IAppLocalLoggingSvc<>), typeof(Services.AppLoggingSvc<>));
Thanks for reading! Happy Coding.
Full Error Listing:
System.InvalidOperationException: Cannot consume scoped service 'Enterprise.Logging.Repository.Context.AppLogContext' from singleton 'WF.Library.Shared.Logging.IAppLocalLoggingSvc`1[Enterprise.Logging.App.Rest.Controllers.AppMastersController]'.
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitScopeCache(ServiceCallSite scopedCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitConstructor(ConstructorCallSite constructorCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSiteMain(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.VisitRootCache(ServiceCallSite singletonCallSite, CallSiteValidatorState state)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(ServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteValidator.ValidateCallSite(ServiceCallSite callSite)
at Microsoft.Extensions.DependencyInjection.ServiceProvider.Microsoft.Extensions.DependencyInjection.ServiceLookup.IServiceProviderEngineCallback.OnCreate(ServiceCallSite callSite)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.CreateServiceAccessor(Type serviceType)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ActivatorUtilities.GetService(IServiceProvider sp, Type type, Type requiredBy, Boolean isDefaultParameterRequired)
at lambda_method(Closure , IServiceProvider , Object[] )
at Microsoft.AspNetCore.Mvc.Controllers.ControllerActivatorProvider.<>c__DisplayClass4_0.<CreateActivator>b__0(ControllerContext controllerContext)
at Microsoft.AspNetCore.Mvc.Controllers.ControllerFactoryProvider.<>c__DisplayClass5_0.<CreateControllerFactory>g__CreateController|0(ControllerContext controllerContext)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
A few months ago, I was enabling paging on a .NET Core 3.1 MVC application and had my search model passed into a controller method via AJAX. Well, it didn’t work. I received a NULL DTO object no matter what I tried. Trying to figure out what to do about an MVC Ajax JSON Null DTO in a controller method had me chasing my tail.
Fast forward to a few days ago, and guess what, another web app, same use case, same issue. Problem was, I couldn’t remember how I resolved it. Well, after another two hours of tinkering around with different objects, removing default settings in my DTO, and more endless googling, I finally found the issue… again.
The main issue I had is that System.Text.Json is not really usable here. I found out that unless all your properties are strings, you have to set up custom converters for each type. That about sums it up. Unless you’re passing in a very simple object that only has string properties, you can pretty much forget about using this library out of the box.
For those of you in a hurry, here is a summary of what I did. Details of the implementation will follow:
1. Make sure you have “FromBody” in your controller method. I already had this, but it’s what most blog posts focus on.
[HttpPost]
public async Task<IActionResult> CatDisplay([FromBody] SearchModel<LuCategory> searchModelIn)
2. Change the default JSON serializer in your Startup.cs
using Microsoft.AspNetCore.Mvc;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews().AddNewtonsoftJson();
    // ... the rest of your service registrations ...
}
If you get the little squigglies under this method name, then add the NuGet package: Microsoft.AspNetCore.Mvc.NewtonsoftJson
Just so you can see how I’m calling this, here is the JavaScript/jQuery/JSON that I’m sending in:
function GetPaging(ToPage) {
    var _url = "/@Model.controllerName/@Model.actionName";

    // Set the global values for sorting post back
    var searchModel = {};
    searchModel.SortColumn = '@Model.SortColumn';
    searchModel.PrevSortColumn = ''; // Leave blank so sorting doesn't kick;
    searchModel.CurrentPage = ToPage;
    searchModel.PageSize = @Model.PageSize;
    searchModel.SearchTerm = '@Model.SearchTerm';
    searchModel.SearchFilter = '@Model.SearchFilter';
    searchModel.SortDescending = '@Model.SortDescending';
    searchModel.ActiveOnly = '@Model.ActiveOnly';
    searchModel.RefId = @Model.RefId;
    searchModel.RefUniqueId = '@Model.RefUniqueId';

    $.ajax({
        type: "POST",
        url: _url,
        async: true,
        contentType: "application/json",
        data: JSON.stringify(searchModel),
        dataType: "html",
        success: function (result, status, xhr) {
            $("#gridPartial").html(result)
        },
        error: function (xhr, status, error) {
            alert("Result: " + status + " " + error + " " + xhr.status + " " + xhr.statusText)
        }
    });
}
3. The last problem I ran into was boolean values. In the above example, the boolean value was coming from the Model, so there is no issue. However, if you are trying to get a boolean value from JavaScript or jQuery, you’ll have big problems. In order to be sure that what is being passed as a value can be deserialized into the object, you should have code like:
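A sketch of the idea: coerce the Razor value into a real JavaScript boolean before it goes into the payload, so the bool property on the DTO deserializes correctly:

// Send a real boolean, not the string "True"/"False"
searchModel.SortDescending = ('@Model.SortDescending'.toLowerCase() === 'true');
searchModel.ActiveOnly = ('@Model.ActiveOnly'.toLowerCase() === 'true');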
Lookup engine design can shave hours off of any enhancement or support issue you’re involved in!
Almost every database enabled project that I’ve written in my career had a requirement for lookup values. Enumerations are great when needed to pass parameters between methods, but when you need to display values to users, I wanted something more flexible. There are several things that usually come up as requirements:
I want to be able to use a list of values in drop downs
I want the values to be categorized
I want the values to be ordered by more than alphabetical
I want the values to have alternate values that can be used for decisions and code conversions
I want the values to be dynamic, so I can add and remove them at will
I want to be able to deactivate values as opposed to deleting them
I want to do all this without a software release
I don’t want to deploy values to different environments by using Identity Insert
I want a user interface to maintain the values
I want to be able to store different types of values, strings, dates, integers etc.
What typically happens is that a table of values is created, often with a few records that are for a specific purpose. These values are then solely maintained by the DBA.
This usually forces a developer to do one of two things:
1. Use magic numbers
if (EmployeeTypeId == 1) { }
2. Use textual value
if (EmployeeType == "Supervisor") { }
In example one (1), using magic numbers is problematic. If there is ever a day that users can add values to this table, how do you guarantee that the unique Ids are synchronized between DEV, QA, UAT and Production?
In the second example (2), the business users will inevitably want to change the text value of an entry, thereby breaking code.
As an afterthought, a column will be added later to allow for an active (bit) flag of some sort.
Why should I care?
Not having a centralized lookup system means that the following things will occur:
Items will be added to this table in DEV and then not propagated uniformly, causing a failed deployment to production. I’ve spent many late nights troubleshooting deployment issues that turned out to be missing lookup values in a table.
Someone will put in a request to change a value in production and cause exceptions to be raised in the application. The development team is rarely consulted for changes like this.
The DBA will be strapped with babysitting these tables that are strewn about the database.
Lack of Foreign Keys will cause developers that are troubleshooting or enhancing to spend lots of time tracking down tables that have lookup values.
Developers may assume there is no lookup table and hard code the values they see elsewhere in their new pages. Then future developers who find the table will see that there is a mismatch and have to spend extra time to rectify the issue.
Whenever I see pain points like this, I usually try to think of a solution that could resolve known issues, and as a plus, be generic enough to be re-usable. If there isn’t a coordinated effort, your project could wind up with two or more solutions for providing an engine to accomplish this.
The advantages of a unified method are many:
Support becomes easier, as all lists of values are handled the same way in code
All lists of values are stored in the database in the same format
An admin web page can easily be used to maintain all the lookup lists
The engine can have special functions built in
The engine and its abilities can be easily versioned
The engine can be used to convert values
I devised a simple table structure to be used for look-ups. It has been enhanced a little over the years, but has remained essentially unchanged for the past decade or so. That’s right, this simple solution has been able to solve all the lookup needs over the years without modification!
First, let me explain what really adds the power to this structure. There are two columns that are more important than any others:
lu_categories.catshortname
lu_items.lucode
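To make the discussion concrete, here is a rough sketch of the two tables; the column lists are illustrative, not the exact schema:

CREATE TABLE lu_categories (
    id            INT IDENTITY(1,1) PRIMARY KEY,
    catshortname  VARCHAR(50)  NOT NULL,   -- stable key referenced by code
    catname       VARCHAR(100) NOT NULL,
    active        BIT          NOT NULL DEFAULT 1
);

CREATE TABLE lu_items (
    id             INT IDENTITY(1,1) PRIMARY KEY,
    lu_category_id INT          NOT NULL REFERENCES lu_categories (id),
    lucode         VARCHAR(50)  NOT NULL,  -- stable key referenced by code
    luvalue        VARCHAR(255) NOT NULL,  -- the value displayed or converted
    sortorder      INT          NOT NULL DEFAULT 0,
    active         BIT          NOT NULL DEFAULT 1
);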
When a new [lu_category] record is added, the [catshortname] is a value that is used in any code to pull back the category. So my typical implementation is that once a category is created, the [catshortname] should never change. The ONLY reason to use the [lu_category].[id] should be in the maintenance user interface, but never anywhere else. This solves the problem of synchronizing this table between environments.
Also, when a new [lu_item] record is added, it must have a Foreign Key (FK) back to the [lu_category].[id] column. The [lu_item].[lucode] is the way this item is typically referred to in code. The [lu_item].[id] can, and should, be used as a foreign key in tables that use it. This can cause the dreaded Primary Key coordination issue mentioned earlier if, for instance, other data with an FK to this table is being moved from environment to environment; however, that issue exists no matter what your solution, so it is not made better or worse by using this table.
You can use this to look up values by creating a SQL function, and by using a Lookups business object in your C# application. I’ll be posting some of my Lookup object code in future posts.
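For illustration, the SQL side might expose a scalar function like the hypothetical fn_GetLuValue below, called with a category short name and an item code:

-- Both calls return the luvalue for the matching lu_items record
SELECT dbo.fn_GetLuValue('EMPLOYEETYPE', 'SUP');
SELECT dbo.fn_GetLuValue('EMPLOYEETYPE', 'MGR');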
In the example above, there are two simple calls to get the luvalue for a record in the lu_items table using the catshortname and the lucode. This is a typical use case.
There are many who argue against this type of structure. Since there are two PKs involved in the relationship, proper FKs are impossible, or so they say. For a typical application, there is only one FK required from the lu_items table. If you’re concerned that this Id might point to a value in the wrong category, then I believe your data issues are much more vast than worrying about these two tables. If you live in a world where you often have data corruption issues through mismatched FKs, then don’t use this solution. My only concern is that arguing against it in general is improper. This is a case where it can make so many lives simpler, even if there are edge cases where it is not optimal.
I’ve written a demo application using a .NET Core MVC application that not only shows the lookup system, but demos home-grown paging as well. Feel free to download it and give it a try.
Thanks for reading, I hope this has given you some food for thought about designing engines, or micro services, that can be used throughout your organization to simplify development and support.