Jack Yasgar has been developing software for various industries for two decades. Currently, he uses C#, jQuery, JavaScript, and SQL Server with stored procedures and/or Entity Framework to produce responsive MVC web sites that communicate with a service layer through RESTful APIs in Web API 2.0 or Microsoft WCF web services. The infrastructure can be internal, shared, or hosted in Azure.
Jack has designed dozens of relational databases that use the proper primary keys and foreign keys to allow for data integrity moving forward.
While working in a Scrum/Agile environment, he is a firm believer that quality software comes from quality planning. Without getting caught up in analysis paralysis, it is still possible to achieve a level of design that allows an agile team to move forward quickly while keeping re-work to a minimum.
Jack believes, “The key to long-term software success is adhering to the SOLID design principles. Software written quickly, using wizards and other shortcuts, can impress the business sponsor / product owner for a short period of time. Once the honeymoon is over, the product owner will stay enamored only if the team can implement changes quickly and fix bugs in minutes, not hours or days.”
Jack has become certified by the Object Management Group as OCUP II (OMG Certified UML Professional) in addition to his certification as a Microsoft Certified Professional. The Unified Modeling Language (UML) provides a visual guide to Use Cases and Activities that can help the product owner design software that meets the end users’ needs. The software development teams then use the same drawings to create their Unit Tests to make sure that the software meets all of those needs.
The QA testing team can use the UML drawings as a guide to produce test cases. Once the software is in production, the UML drawings become a reference for business users and support staff to know what decisions are happening behind the scenes to guide their support efforts.
Starting around August, I began having an issue where I would start up an Office application such as Word or Excel and find that it had logged in with a random, usually wrong, email address.
I have an Office subscription through my business and get updates to the Office apps periodically.
In my subscription, I have two domains, with several emails in each. For instance, on one domain, I have emails with:
jack@domain.com
support@domain.com
wifisupport@domain.com
On another domain, I have similar addresses:
jack@domain2.com
support@domain2.com
My computer account is logged into an old email address I used to have on live.com, Microsoft’s older but still-in-use Azure Active Directory system.
So naturally, I have all of these emails in my Outlook client. I have seen no documentation from Microsoft, but as a developer, it appears that the authentication method may have changed from each Office app handling authentication internally to the credentials now being cached in the Windows operating system. Since there is currently no way to mark any credential as “primary”, the Office application seems to log in using whichever credential it sees was used last.
Now, I’m not talking about the last credential you purposely used to log in somewhere. It seems that whatever order Outlook checks my emails in can shuffle the credential priority. The result is that if I start an Office application that is licensed under jack@domain.com, it may automatically connect as wifisupport@domain.com and not allow me to edit my documents.
The main way to tell this is happening is the “Privacy” popup that you’ll get when the app tries to log in with a credential that does not have a license.
You’ll also get the following:
Sign in to get started with Office. Sign in or create account | I have a product key.
The problem I had was that when I clicked on “Sign in or create account”, the legitimate account was not in the list. If I tried to log in as a different user, I received:
Error: Another account from your organization is already signed in on this device. Try again with a different account.
Well, this is a quandary.
I opened a ticket with Microsoft. I was told, “Yeah, lots of people are having this issue.” Well, guess what: he didn’t have any simple resolution.
SOLVED:
Here’s what I found. You can’t really stop this from happening at this point. However, even though most Office apps will not let you change to the correct credential, Outlook will. So the current workaround is:
When you start your computer, always log into Outlook first
Check the credential by clicking on “File” > “Office Account”
Click on “Switch Account” if needed
Once you have Outlook set to the correct account, the other Office apps seem to take its lead.
I hope Microsoft will come up with a better solution than this in a future release.
.NET Core has built-in functionality that allows you to set up an environment variable called ASPNETCORE_ENVIRONMENT that tells the application which appsettings.json to use. I’ve tried to use this functionality on VMs in Azure with no luck. My applications seem to ignore this variable setting on VMs. It did work for me when deployed as an app service.
If you host your website on a shared server, you may not have any rights to create system environment variables.
This tutorial is using Visual Studio 2022.
When a .NET Core application first runs, in theory, it looks for the ASPNETCORE_ENVIRONMENT environment variable and uses it to select the proper appsettings.json for running the application. By default, a new application in Visual Studio will create three files:
appsettings.json
appsettings.Development.json
appsettings.Production.json
Here’s the kicker: if the application doesn’t find the environment variable, it defaults to appsettings.Production.json. That behavior is entirely unacceptable to me. If you deploy your app to a new server, do you want it to default to production values? I don’t.
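To illustrate the default behavior, here is a minimal .NET 6 style Program.cs sketch (not tied to any particular project):

var builder = WebApplication.CreateBuilder(args);

// CreateBuilder reads ASPNETCORE_ENVIRONMENT and falls back to "Production" when it isn't set.
// Configuration is then layered as appsettings.json first, with
// appsettings.{EnvironmentName}.json (e.g. appsettings.Production.json) loaded over the top.
var app = builder.Build();

app.Run();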
Fortunately, there is a way to control all this behavior right from within your application code using web.config transformations. Since .NET Core doesn’t really depend on web.config files, many developers with only a few years’ experience may not even know about web.config transformations.
web.config Transformations
The web.config file is read by Internet Information Services (IIS), merged with the local machine.config file, and tells IIS how to handle certain things. Often, when you make configuration changes in IIS, all it really does is update the web.config file for that application.
There is an XML schema definition called “XML-Document-Transform”, prefixed with “xdt”, which allows you to have different versions of your web.config file that update values in the base web.config file when you deploy an application. Since most of us don’t use this file much any longer, we usually don’t look at it, nor do we normally set up transformations. We’re going to use this functionality to control which appsettings.json our application uses based on which configuration we deploy with.
Visual Studio Configuration Manager
At the top of the Visual Studio IDE, there is a drop-down for configuration. Typically, it will say “Debug”. If you click the drop-down arrow, you can select “Configuration Manager…”, or optionally, you can right-click on the project name and select the same entry.
The default configurations can be viewed in the “Active solution configuration:” drop-down. Usually it’s just “Debug” and “Release”.
Click the drop down and select “<New…>”.
Let’s name it “UAT”
Copy settings from: “Debug”
Create new project configurations should be checked
Click on “OK”
Click on “Close”
You’ll notice that your drop-down at the top of Visual Studio now says “UAT”.
appsettings.json files
Later versions of .NET Core (post 3.1) don’t seem to create the appsettings.Production.json file by default. However, if you have an older project that has been upgraded, the file may still exist. I’ll be using a project from an earlier post on Lookups to demonstrate this technique.
In your main web application:
Right-click on your appsettings.json file and select “Copy”
Right-click on your project name and select “Paste”
Rename the file “appsettings – Copy.json” to “appsettings.UAT.json”
Your appsettings entries in Solution Explorer should now include the new appsettings.UAT.json alongside appsettings.json and appsettings.Development.json.
You can edit the appsettings.UAT.json file and modify any of the entries to match what you need for your UAT environment/server.
I highly recommend that you delete appsettings.Production.json; I typically use an appsettings.PROD.json instead. This prevents the application from accidentally defaulting to the production config when deployed to a new environment/server.
Creating the web.config transformation
Now let’s look at your current web.config file. There is a chance that your project doesn’t have one; if so, you can create one based on the default shape shown below, but make sure to update the arguments in the “aspNetCore” key.
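A typical default web.config is shaped roughly like the sketch below; the DLL name in the arguments attribute (YourWebApp.dll here) is a placeholder for your own assembly:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath="dotnet" arguments=".\YourWebApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess">
        <environmentVariables>
          <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
        </environmentVariables>
      </aspNetCore>
    </system.webServer>
  </location>
</configuration>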
This tells the runtime that when you run the application, it will use the “Development” configuration, which defaults to picking up values from your appsettings.Development.json. This works even though we didn’t create a configuration called Development; we still just have “Debug” for local runs.
Visual Studio used to have the option to right-click your web.config file and select “Add Transformations”, but I don’t see it any longer in the latest version of Visual Studio 2022. So we’ll create the transform manually.
Right-click on the web.config and select “Copy”
Right-click on your project name and select “Paste”
Rename the “web – Copy.config” file to “web.UAT.config”
Double-click the web.UAT.config file and modify the “configuration” element on the second line to look like this:
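<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">

For the transform to swap the environment value on publish, the environmentVariable element in this copy also needs the new value and the xdt attributes, something along these lines:

      <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="UAT"
                           xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />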
This tells the build process to find the entry with the same name in the base web.config and replace its value with the value from this file.
Now, when you publish this project, you can select the “UAT” configuration and it will push a web.config with your project that automatically tells your application to use the values in the appsettings.UAT.json file. If you publish from the command line or through Jenkins, that command looks like this:
dotnet publish --configuration UAT
The web.config in your publish location should have the line:
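<environmentVariable name="ASPNETCORE_ENVIRONMENT" value="UAT" />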
This post is mostly for me :-). Every time I create a new project, I have to hunt the internet to remind myself how to add an author to Sparx Enterprise Architect.
If you’re like me, you always pull a Diagram Notes section onto your drawings for the metadata it provides. By default, on Windows, it will display the Author as some subset of your Windows user ID, which can be somewhat cryptic.
I don’t store my project files in a global database. Instead, I create separate project files in the Git repositories of the particular application I’m designing for. That means I periodically have to remember how to do this, and it frustrates me when I can’t. It’s not just me; the options are buried a little deep in the hierarchy.
This post shows how to do it from beginning to end using Sparx Enterprise Architect 16.0.x.
First, you need to add your new name as an Author.
Click on the “Settings” tab, then click the drop down in the “Reference Data” section called “Model Types”.
Then select “People” from the drop down.
In the “Name(s)” section, type the name that you would prefer over the default. Select a Role if you want one and you can even add notes if that is important. Then click on “Save”. Then you can click on “Close”.
Back at your diagram, right-click on any open space and select “Properties”.
It defaults to the General tab on the left. You can then change the Author from the drop-down on this dialog box.
You typically have to do this for every new drawing, but at least once you’ve added it to the list of authors, it will stick for any drawing in the project.
Now, when you drag Diagram Notes onto your drawing, it will reflect your new Author name.
Deploying a router in a remote location means that you must support it with a VPN, etc. You’ll need Dynamic DNS on your Mikrotik router in order to connect easily. Dynamic DNS is a service that many registrars offer that lets you set up a system to update the public IP address of the router so that you can connect to it using a friendly name.
For this example, I’m going to use my registrar, Directnic, however, if your registrar offers Dynamic DNS, it probably works in a similar way, you’ll just have to follow the instructions on their site.
Above is a drawing of a typical simple setup where you may have a Mikrotik router hooked up to a cable modem at a remote location. The modem gets a public IP address from the Internet Service Provider (ISP) when it is connected. These IP addresses are usually not permanent and can change for various reasons. This makes it difficult to make sure you can connect to the router when you need to several days or months from now.
Let’s say that your modem at this location has a public IP address of 142.250.217.206. You have a VPN endpoint configured on your router on port 8443. In order to use an SSL VPN, you have to have a domain for which you create a certificate. So if you want to connect to the router, you might have to modify your host file on your local machine and put an entry like:
142.250.217.206 mysiterouter.seethesite.com
Then you can connect to the VPN using mysiterouter.seethesite.com:8443. This is a major cramp, especially if you have lots of sites and multiple people that need to connect.
The objective here is to create a DNS record for mysiterouter.seethesite.com and have it always updated with its public IP address, even if it changes.
Log into your registrar and edit the domain record for the domain you’re going to use for the dynamic DNS. I’m using Directnic.com in these examples.
You’ll want to create an A Record for the domain with the subdomain you’re going to use for connecting to the Router.
When you create the A Record, put in a random IP address. This way, you can see if the script on the router is working as expected by noting if the IP Address is updated to something else. Make sure that you have the Dynamic DNS option selected.
Once you add the record, it should appear in your DNS record list with the Dynamic DNS option enabled.
Click on the little globe icon and it will display the link that you have to hit to update the IP Address.
The link will look something like this:
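The token in the middle is unique to your record; the example below reuses the gateway token that appears in the router script later in this post, with the placeholder 8.8.8.8 address on the end:

https://directnic.com/dns/gateway/4c8b1f91928ca1937fe4d665cd5818f07cbca7f93c7fd84858591f55c302be2e/?data=8.8.8.8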
Copy this link to notepad, as we’ll need it for the future steps.
Now connect to your Mikrotik router using WinBox. Click on System > Scheduler and click on the plus icon to add a new scheduler record.
Give it a meaningful name. The Start Date/Time is not important, as long as it is in the past. The Interval is in three sections: hh:mm:ss. I suggest setting it to something like 00:01:00 (every minute) while you’re testing, then changing it later to 01:00:00 for a one-hour interval. Leave all the check boxes as they are. It’s probably overkill for what we want, but I haven’t had time to find the minimum checkboxes needed.
Now we have to add the “On Event” section, using the URL that you copied from your registrar’s Dynamic DNS settings.
There are only two lines needed for the On Event script. The first utilizes a feature that I believe is only available in ROS versions 6.x and higher.
:local publicip [/ip cloud get public-address]
This creates a local variable called “publicip” by using the /ip cloud command. The “get public-address” means that it will pull that property from the return object.
If you want to see what this object contains, in WinBox, you can click on “New Terminal” and type:
/ip cloud print
Now we need to replace the placeholder IP address in our registrar link and add it into the following command:
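/tool fetch url="https://directnic.com/dns/gateway/4c8b1f91928ca1937fe4d665cd5818f07cbca7f93c7fd84858591f55c302be2e/?data=$publicip" mode=https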
This command does a fetch on the URL. Notice that after data=, I replaced the “8.8.8.8” with “$publicip”, which will be replaced with the value retrieved by the /ip cloud command and stored in the variable.
Remember, your URL may look vastly different, but the important part is to replace the IP address with the variable name.
So your entire On Event should look something like:
:local publicip [/ip cloud get public-address]
/tool fetch url="https://directnic.com/dns/gateway/4c8b1f91928ca1937fe4d665cd5818f07cbca7f93c7fd84858591f55c302be2e/?data=$publicip" mode=https
Then click on “OK” to save it.
In WinBox, you can click on “Log”. You should eventually see entries with a Message that looks like:
fetch: file "?data=142.250.217.206" downloaded
The actual IP address in the message should be your public IP address as far as the Mikrotik router is concerned. If your router is not connected to the internet, the Scheduler might not run at all.
Once you see a message and it looks like the data is being shown properly, go back to your registrar and refresh your DNS settings screen to look at your A Record. It should be updated from “8.8.8.8” to show your real public IP address.
The last step is to edit your Scheduler record and change the interval to 01:00:00 so that it only runs once an hour. Having it ping every minute is too often and unnecessary.
Detecting which button was clicked to cause a post back is very easy, once you know how to do it!
I often have screens that have multiple submit buttons on a single form that require a post back. I needed to pass the button value to the controller in MVC. For instance, I may have a data entry screen where a user can click a button that says “Save” or “Save as New”. I need to be able to detect which button they clicked when the page posts back.
Fortunately, there is an easy way to determine which button the user selected when the page posts back on a submit action.
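As a sketch (the form and button text here are made up, but the name and value attributes line up with the controller example further down), the markup could look like this:

<form asp-action="SearchBegin" method="post">
    <button type="submit" name="submitButton" value="TopTen">Top Ten</button>
    <button type="submit" name="submitButton" value="Trad">Traditional Search</button>
</form>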
The buttons must be of type=submit; type=button won’t post back. You have a choice here to use the value attribute or not. If you don’t declare a value attribute, then what you’ll receive in the controller is the text of the button. While this works, you or another developer may change the text in the future and not realize it will break your code. I recommend using value= as shown above; it’s less likely to change in the future.
The next most important part is the name attribute. Every button that will post back should have the same name, and that name must also match the parameter name in your controller. The value you declare on the button will be the argument passed to your controller method.
public async Task<ActionResult> SearchBegin([FromForm] SearchPageModel _searchModelIn, [FromForm] string submitButton)
{
    // If you "name" your buttons as "submitButton"
    // then you can grab the value of the button
    // here to make decisions on which button was clicked
    switch (submitButton)
    {
        case "TopTen":
            return TopTen(_searchModelIn);
        case "Trad":
            return Traditional(_searchModelIn);
        default:
            break;
    }
    return View("~/"); // Go home
}
The parameter name in your method must match the name attribute on your buttons exactly. The type passed in will be a string, although I imagine that if the value attribute on all your buttons were numeric, you could declare it as an int.
Once you’re in your method, you can use logic in a switch statement to detect the value passed in and make a decision how to proceed.
Using the Task async/await pattern for grabbing data can be a real performance enhancement. When you thread off the calls, it’s pretty normal to want the Task return objects in one single, usable collection. The example I can give is a method that needs to gather several different categories of lookup items; these calls all return a collection of the same type.
When you await the tasks, you generally have a few options:
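One option (sketched here with hypothetical lookup methods that each return a Task<List<LuItem>>) is to await each call one at a time and stitch the results together by hand:

var _return = new List<LuItem>();
_return.AddRange(await GetColorsAsync());
_return.AddRange(await GetSizesAsync());
_return.AddRange(await GetStatusesAsync());
_return.AddRange(await GetCategoriesAsync());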
Not sure how you feel, but this is horrible. I’m sure I’ve done something like this in the past, but I’d prefer not to think about it.
Use WhenAll to retrieve them in an Array
Task.WhenAll, when declared with a type, will return an array of the return type. So in this case, it returns an array of List<LuItem>. We can then do a simple LINQ query to push them all into one collection.
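A sketch of that approach, using the same hypothetical lookup methods as above:

var tasks = new List<Task<List<LuItem>>>
{
    GetColorsAsync(),
    GetSizesAsync(),
    GetStatusesAsync(),
    GetCategoriesAsync()
};

// WhenAll returns Task<List<LuItem>[]> here, so awaiting it gives an array of lists.
List<LuItem>[] results = await Task.WhenAll(tasks);

// Flatten the array of lists into one list.
var _return = new List<LuItem>();
results.ToList().ForEach(list => _return.AddRange(list));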
In this example, we await the tasks with WhenAll, which has a return type, as opposed to WaitAll, which does not. As stated earlier, this example returns the collection as Task<List<LuItem>[]>. So we’re most of the way there. We use the ToList().ForEach LINQ call to transform the array of lists into a single list called _return.
Summing a collection that is within a collection without using nested foreach loops can be easily done with LINQ
It’s hard to think of a good name for this post, but if you have a collection where each item has a collection of values you need to sum, you can do that easily with LINQ.
Say you have a List<CartItem> in a shopping cart. Each item has a list of DecimalCost values; perhaps the user has ordered different sizes or colors, and each has an associated cost.
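A sketch, assuming a CartItem class shaped roughly like the description above and a cartItems variable holding the List<CartItem>:

public class CartItem
{
    public string Name { get; set; } = string.Empty;
    public List<decimal> DecimalCost { get; set; } = new();
}

// Sum every cost of every item without nested foreach loops.
decimal orderTotal = cartItems.Sum(item => item.DecimalCost.Sum());

// Or flatten first with SelectMany; the result is the same.
decimal orderTotalAlt = cartItems.SelectMany(item => item.DecimalCost).Sum();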
I started receiving a duplicate-file error on the .XML file that I include in my builds for APIs to enhance the Swagger descriptions. This file is enabled in the Build tab of the project properties.
This is using Visual Studio 2019 and publishing through Azure Pipelines.
This file gets created at the project root and in the bin folder when the project builds. It seems the .NET 6 build tools no longer like having duplicate files.
Method
I have the properties of the xml documentation file in the project set to:
Build Action: None
Copy to Output Directory: Copy if newer
Then I added this to the top PropertyGroup section of the main project file where the error is occurring:
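For reference, the .NET 6 SDK has a documented property that relaxes its duplicate publish output check; a sketch of that kind of entry (verify it matches the exact error you’re seeing) looks like this:

<PropertyGroup>
  <ErrorOnDuplicatePublishOutputFiles>false</ErrorOnDuplicatePublishOutputFiles>
</PropertyGroup>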
I’m sure we’ve all experienced the great idea of looping through a collection and trying to remove an item that doesn’t need to be there. You’ll get the infamous “Collection was modified; enumeration operation may not execute”. You can create a new collection and add the ones you want to keep to that one, but that’s extra overhead.
Collection was modified; enumeration operation may not execute
Here is a method you can use outside of a foreach loop:
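A sketch of the call; the predicate here (drop any element with an empty Value) is just an example:

entity.TheBody.Elements.RemoveAll(element => string.IsNullOrWhiteSpace(element.Value));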
In this example, I have an email named “entity” with a “TheBody” property that has a collection of Elements. The Elements have two properties, “Key” and “Value”, basically like a Dictionary entry. Creating a new list of elements means a new List<EmailElement> and then a .Clear and .AddRange, which kills more CPU cycles and milliseconds.
However, executing the above line will remove all the items from the collection that meet the criteria in the lambda.
Port ##### is already being used by another application.
There are times when the random port selected for use by Visual Studio for IIS Express can cause an error of “The specified port is in use.” This could be because you have something installed on your device that is already using that port.
The specified port is in use. Port ##### is already being used by another application.
An error occurred launching IIS Express. Unable to launch the configured Visual Studio Development Web Server. Port ‘#####’ is in use.
To see if that’s really true, you can use the netsh command-line tool. Open a command prompt and run this command:
netsh interface ipv4 show excludedportrange protocol=tcp
You’ll see a report such as this:
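The exact ranges vary by machine, but the output is shaped like this:

Protocol tcp Port Exclusion Ranges

Start Port    End Port
----------    --------
      2869        2869
      5357        5357
     49709       49808
     50000       50059     *

* - Administered port exclusions.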
If the port you have set up in your launchSettings.json conflicts with one of the ranges in this report, change the value in your launch settings to one not in the list and try again.
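For reference, a trimmed-down launchSettings.json for an IIS Express profile looks something like this (the port numbers are examples):

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:50059",
      "sslPort": 44300
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}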
In the example launchSettings.json above, change the applicationUrl setting to a port that is not in use, such as http://localhost:8081. If you’re going to use higher ports, remember to do a search on the web first, as some ports are reserved for special purposes. Many well-known ports should be avoided even if they don’t show up in the report above:
21 – FTP
22 – SSH
80 – Default Website
443 – Secure Website
etc.
Basically, to be safe, avoid any port with three digits or fewer. I’ve found that ports in the 5000-5999 range have the least potential for conflicts with other applications and network functions.